Friday, May 19, 2017

Principal component analysis of state unemployment rates

One of my hobbies on this blog is to apply various principal component analyses (PCA) to economic data. For example, here's some jobs data by industry (more here). I am not saying this is original research (many economics papers have used PCA, but a quick Googling did not turn up this particular version).

Anyway, this is based on seasonally adjusted FRED data (e.g. here for WA) and I put the code up in the dynamic equilibrium repository. Here is all of the data along with the US unemployment rate (gray):


It's a basic Karhunen–Loève decomposition (Mathematica function here). Blue is the first principal component; the rest of the components aren't as relevant. To a pretty good approximation, the business cycle in employment is a national phenomenon:


There's an overall normalization factor based on the fact that we have 50 states. We can see the first (blue) and second (yellow) components alongside the national unemployment rate (gray, right scale): 


Basically the principal component is the national business cycle. The second component is interesting as it suggests differences between states based on the two big recessions of the past 40 years (the 1980s and the Great Recession) that go in opposite directions. The best description of this component is that some states did much worse in the 1980s and some did a bit better in the 2000s (see the first graph of this post).
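If you want to reproduce the decomposition without Mathematica, here's a minimal sketch in Python (the actual code is the Mathematica in the repository linked above); the FRED series naming convention (two-letter state code plus "UR", e.g. WAUR) and the use of pandas-datareader are assumptions on my part:

```python
# Minimal sketch (not the post's actual Mathematica code): pull seasonally
# adjusted state unemployment rates from FRED and take the principal components.
import numpy as np
import pandas as pd
from pandas_datareader import data as pdr

states = ["WA", "CA", "TX", "NY", "OH"]   # extend to all 50 states
df = pd.concat(
    [pdr.DataReader(f"{s}UR", "fred", "1976-01-01")[f"{s}UR"] for s in states],
    axis=1, keys=states).dropna()

# Karhunen-Loeve / principal component decomposition via the SVD
X = df.values - df.values.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)

pc1 = pd.Series(U[:, 0] * S[0], index=df.index)   # first principal component over time
weights = Vt[0]                                   # each state's weight on it
print(pc1.tail(), dict(zip(states, np.round(weights, 2))))
```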

As happened before, the principal component is pretty well modeled by the dynamic equilibrium model (just like the national data):


The transitions (recession centers) are at 1981.0, 1991.0, 2001.7, and 2008.8, with a positive shock at 2014.2. These are consistent with the national data transitions (1981.1, 1991.1, 2001.7, 2008.8 and 2014.4).
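For reference, here's a minimal sketch of what that dynamic equilibrium fit looks like in Python, assuming the usual form of the model (a constant logarithmic rate plus logistic-shaped shocks); the actual fits were done with the Mathematica code in the repository:

```python
# Minimal sketch of a dynamic equilibrium fit: log u(t) is a line (the dynamic
# equilibrium) plus logistic "shocks"; the fitted shock centers t0 are the
# transition dates quoted above. Shown with a single shock for brevity.
import numpy as np
from scipy.optimize import curve_fit

def dyn_eq(t, alpha, c, a, b, t0):
    """log u(t) = alpha*t + c + a / (1 + exp(-(t - t0)/b))"""
    return alpha * t + c + a / (1.0 + np.exp(-(t - t0) / b))

def fit_dynamic_equilibrium(t, u, guess=(-0.1, 2.0, 1.0, 0.5, 2009.0)):
    """t: time in years; u: a positive series (e.g. an unemployment rate)."""
    params, _ = curve_fit(dyn_eq, t, np.log(u), p0=guess, maxfev=10000)
    return params   # params[-1] is the shock center ("recession center")
```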

Wednesday, May 17, 2017

My article at Evonomics

I have an article up at Evonomics about the basics of information equilibrium looking at it from the perspective of Hayek's price mechanism and the potential for market failure. Consider this post a forum for discussion or critiques. I earlier put up a post with further reading and some slides linked here.

I also made up a couple of diagrams illustrating price changes that I didn't end up using:




Tuesday, May 16, 2017

Explore more about information equilibrium

Originally formulated by physicists Peter Fielitz and Guenter Borchardt for natural complex systems, information equilibrium [arXiv:physics.gen-ph] is a potentially useful framework for understanding many economic phenomena. Here are some additional resources:


A tour of information equilibrium
Slide presentation (51 slides)


Dynamic equilibrium and information equilibrium
Slide presentation (19 slides)


Maximum entropy and information theory approaches to economics
Slide presentation (27 slides)


Information equilibrium as an economic principle
Pre-print/working paper (44 pages)

Saturday, May 13, 2017

Theory and evidence in science versus economics


Noah Smith has a fine post on theory and evidence in economics so I suggest you read it. It is very true that there should be a combined approach:
In other words, econ seems too focused on "theory vs. evidence" instead of using the two in conjunction. And when they do get used in conjunction, it's often in a tacked-on, pro-forma sort of way, without a real meaningful interplay between the two. ... I see very few economists explicitly calling for the kind of "combined approach" to modeling that exists in other sciences - i.e., using evidence to continuously restrict the set of usable models.

This does assume the same definition of theory in economics and science, though. However there is a massive difference between "theory" in economics and "theory" in sciences. 

"Theory" in science

In science, "theory" generally speaking is the amalgamation of successful descriptions of empirical regularities in nature concisely packaged into a set of general principles that is sometimes called a framework. Theory for biology tends to stem from the theory of evolution which was empirically successful at explaining a large amount of the variation in species that had been documented by many people for decades. There is also the cell model. In geology you have plate tectonics that captures a lot of empirical evidence about earthquakes and volcanoes. Plate tectonics explains some of the fossil record as well (South America and Africa have some of the same fossils up to a point at which point they diverge because the continents split apart). In medicine, you have the germ theory of disease.

The quantum field theory framework is the most numerically precise amalgamation of empirical successes known to exist. But physics has been working with this kind of theory since the 1600s when Newton first came up with a concise set of principles that captured nearly all of the astronomical data about planets that had been recorded up to that point (along with Galileo's work on projectile motion).

But it is important to understand that the general usage of the word "theory" in the sciences is just shorthand for being consistent with past empirical successes. That's why string theory can be theory: it appears to be consistent with general relativity and quantum field theory and therefore can function as a kind of shorthand for the empirical successes of those theories ... at least in certain limits. This is not to say your new theoretical model will automatically be correct, but at least it doesn't obviously contradict Einstein's E = mc² or Newton's F = ma in the respective limits.

Theoretical biology (say, determining the effect of a change in habitat on a species) or theoretical geology (say, computing how the Earth's magnetic field changes) is similarly based on the empirical successes of biology and geology. These theories are then used to understand data and evidence and can be rejected if evidence contradicting them arises.

As an aside, experimental sciences (physics) have an advantage over observational ones (astronomy) in that the former can conduct experiments in order to extract the empirical regularities used to build theoretical frameworks. But even in experimental sciences, experiments might be harder to do in some fields than others. Everyone seems to consider physics the epitome of science, but in reality the only reason physics probably had a leg up in developing the first real scientific framework is that the experiments required to observe the empirical regularities are incredibly easy to set up: a pendulum, some rocks, and some rolling balls, and you're pretty much ready to experimentally confirm everything necessary to posit Newton's laws. In order to confirm the theory of evolution, you needed to collect species from around the world, breed some pigeons, and look at fossil evidence. That's a bit more of a chore than rolling a ball down a ramp.

"Theory" in economics

Theory in economics primarily appears to be solving utility maximization problems, but unlike science there does not appear to be any empirical regularity that is motivating that framework. Instead there are a couple of stylized facts that can be represented with the framework: marginalism and demand curves. However these stylized facts can also be represented with ... supply and demand curves. The question becomes what empirical regularity is described by utility maximization problems but not by supply and demand curves. Even the empirical work of Vernon Smith and John List can be described by supply and demand curves (in fact, at the link they can also be described by information equilibrium relationships).

Now there is nothing wrong with using utility maximization as a proposed framework. That is to say there's nothing wrong with positing any bit of mathematics as a potential framework for understanding and organizing empirical data. I've done as much with information equilibrium.

However the utility maximization "theory" in economics is not the same as "theory" in science. It isn't a shorthand for a bunch of empirical regularities that have been successfully described. It's just a proposed framework; it's mathematical philosophy.

The method of nascent science

This isn't necessarily bad, but it does mean that the interplay of theory and evidence reinforcing or refuting each other isn't the iterative process we should be thinking about. I think a good analogy is an iterative algorithm: it produces a result, uses that result to update its parameters or its current guess, and feeds the update back into the same algorithm. This can converge to a final result if you start off close to it; in other words, it requires a good initial guess. This is the case in science: the current state of knowledge is probably decent enough that the iterative process of theory and evidence will converge. You can think of this as the scientific method ... for established science.
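To make the analogy concrete, here's a toy example (Newton's method, nothing to do with economics): the same iteration converges quickly from a good initial guess and just cycles forever from a bad one.

```python
# Toy illustration of the iterative-algorithm analogy: Newton's method for a
# root of f(x) = x**3 - 2*x + 2 converges from a good starting point but
# cycles between 0 and 1 from a bad one.
def newton(x, steps=20):
    for _ in range(steps):
        f = x**3 - 2*x + 2
        fprime = 3*x**2 - 2
        x = x - f / fprime
    return x

print(newton(-2.0))   # good initial guess: converges to the real root near -1.769
print(newton(0.0))    # bad initial guess: the iteration never settles down
```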

For economics, it does not appear that the utility maximization framework is close enough to the "true theory" of economics for the method of established science to converge. What's needed is the scientific method that was used back when science first got its start. In a post from about a year ago, I called this the method of nascent science. That method is based on a different metric, usefulness, rather than the model rejection of established science. Here's a quote from that post:
Awhile ago, Noah Smith brought up the issue in economics that there are millions of theories and no way to reject them scientifically. And that's true! But I'm fairly sure we can reject most of them for being useless.


"Useless" is a much less rigorous and much broader category than "rejected". It also isn't necessarily a property of a single model on its own. If two independently useful models are completely different but are both consistent with the empirical data, then both models are useless. Because both models exist, they are useless. If one didn't [exist], the other would be useful.
Noah Smith (in the post linked at the beginning of this post) put forward three scenarios of theory and evidence in economics:
1. Some papers make structural models, observe that these models can fit (or sort-of fit) a couple of stylized facts, and call it a day. Economists who like these theories (based on intuition, plausibility, or the fact that their dissertation adviser made the model) then use them for policy predictions forever after, without ever checking them rigorously against empirical evidence. 
2. Other papers do purely empirical work, using simple linear models. Economists then use these linear models to make policy predictions ("Minimum wages don't have significant disemployment effects"). 
3. A third group of papers do empirical work, observe the results, and then make one structural model per paper to "explain" the empirical result they just found. These models are generally never used or seen again.
Using these categories, we can immediately say 1 & 3 are useless. If a model is never checked rigorously against data, or is never seen again, it can't possibly be useful.

In this case, the theories represent at best mathematical philosophy (as I mentioned at the end of the previous section). It's not really theory in the (established) scientific sense.

But!

Mathematical Principles of Natural Philosophy

Sometimes a little bit of mathematical philosophy will have legs. Isaac Newton's work, when it was proposed, was mathematical philosophy. It says so right in the title. So there's nothing wrong with the proliferation of "theory" (by which we mean mathematical philosophy) in economics. But it shouldn't be treated as "theory" in the same sense as in science. Most of it will turn out to be useless, which is fine if you don't take it seriously in the first place. And using economic "theory" for policy would be like using Descartes to build a mag-lev train ...



...

Update 15 May 2017: Nascent versus "soft" science

I made a couple of grammatical corrections and added a "does" and a "though" to the sentence after the first Noah Smith quote in my post above.

But I did also want to add the point that by "established science" vs "nascent science" I don't mean the same thing as many people mean when they say "hard science" vs "soft science". So-called "soft" sciences can be established or nascent. I think of economics as a nascent science (economies and many of the questions about them barely existed until modern nation states came into being). I also think that some portions will eventually become a "hard" science (e.g. questions about the dynamics of the unemployment rate), while others might become a "soft" science with the soft science pieces being consumed by sociology (e.g. questions about what makes a group of people panic or behave as they do in a financial crisis).

I wrote up a post that goes into that in more detail about a year ago. However, the main idea is that economics might be explicable -- as a hard science even -- in cases where the law of large numbers kicks in and agents do not highly correlate (where economics becomes more about the state space itself than the actions of agents in that state space ... Lee Smolin called this "statistical economics" in an analogy with statistical mechanics). 

I think for example psychology is an established soft science. Its theoretical underpinnings are in medicine and neuroscience. That's what makes the replication crisis in psychology a pretty big problem for the field. In economics, it's actually less of a problem (the real problem is not the replication issue, but that we should all be taking the econ studies less seriously than we take psychology studies).

Exobiology or exogeology could be considered nascent hard sciences. Another nascent hard science might be so-called "data science": we don't quite know how to deal with the huge amounts of data that are only recently available to us and the traditional ways we treat data in science may not be optimal.

Monday, May 8, 2017

Government spending and receipts: a dynamic equilibrium?

I was messing around with FRED data and noticed that the ratio of government expenditures to government receipts seems to show a dynamic equilibrium that matches up with the unemployment rate. Note this is government spending and income at all levels (federal + state + local). So I ran it through the model [1] and sure enough it works out:


Basically, the ratio of expenditures to receipts goes up during a recession (i.e. deficits increase at a faster rate) and down in the dynamic equilibrium outside of recessions (i.e. deficits increase at a slower rate or even fall). The dates of the shocks to this dynamic equilibrium match pretty closely with the dates for the shocks to unemployment (arrows).

This isn't saying anything ground-breaking: recessions lower receipts and increase use of social services (so expenditures over receipts will go up). It is interesting however that the (relative) rate of improvement towards budget balance is fairly constant from the 1960s to the present date ... independent of major fiscal policy changes. You might think that all the disparate changes in state and local spending are washing out the big federal spending changes, but in fact the federal component is the larger component so it is dominating the graph above. In fact, the data looks almost the same with just the federal component (see result below). So we can strengthen the conclusion: the (relative) rate of improvement towards federal budget balance is fairly constant from the 1960s to the present date ... independent of major federal fiscal policy changes.


...

Footnotes:

[1] The underlying information equilibrium model is GE ⇄ GR (expenditures are in information equilibrium with receipts, except during shocks).

Friday, May 5, 2017

Dynamic equilibrium in employment-population ratio in OECD countries

John Handley asks on Twitter whether the dynamic equilibrium model works for the employment-population ratio for other countries besides the US. So I re-ran the model on some of the shorter OECD time series available on FRED (most of them were short, and I could easily automate the procedure for time series of approximately the same length).

As with the US, some countries seem to be undergoing a "demographic transition" with women entering the workforce. Therefore most of the data sets are for men only. I just realized that I actually have both for Greece. These are all for 15-64 year olds, and cases where there was data for at least 2000-2017. Some of the series only go back to 2004 or 2005, which is really too short to be conclusive. I also left off the longer time series (to come later in an update) because it was easy to automate the model for time series of approximately the same length.

Anyway, the men-only model country list is: Denmark, Estonia, Greece, Ireland, Iceland, Italy, New Zealand, Portugal, Slovenia, Turkey, and South Africa. Both men and women are included for: France, Greece (again), Poland, and Sweden. I searched FRED manually, so these are just the countries that came up.

Here are the results (some have 1 shock, some have 2):


What is interesting is that while the global financial crisis seems to often be conflated with the Greek debt crisis, the Greek debt crisis appears to hit much later (centered at 2011.2). For example, the recession in Iceland is centered at 2008.7 (about 2.5 years earlier, closer to the recession center for the US).

...

Update:

Here are the results for Australia, Canada, and Japan which have longer time series:



"You're wrong because I define it differently"

There is a problem in the econoblogosphere, especially among heterodox approaches, where practitioners do not recognize that their approach is non-standard. I'm not trying to single out commenter Peiya, but this comment thread is a teachable moment, and I thought my response had more general application. 

Peiya started off saying:
Many economic theories are based on wrong interpretation on accounting identities and underlying data semantics.
and went on to talk about a term called "NonG". In a response to my question about the definitions of "NonG", Peiya responded:
Traditional definition of the "income accounting identity" (C+I+G = C + S + T or S-I = G-T) is widely-misused with implicit assumption NonG = 0.
So Peiya was using a different definition. My response is what I wanted to promote to a blog post (with one change: linking to Paul Romer's blog post on Feynman integrity, where I realize the direct quote uses the word "leaning" rather than "bending"):
For the purposes of this blog, we'll stick to the traditional definition unless there is e.g. a model of empirical data that warrants a change of definition. Changing definitions of accounting identities and saying "Many economic theories are based on wrong interpretation on accounting identities" is a bit disingenuous. 
Imagine if I said you were wrong because I define accounting identities as statistical equilibrium potentials? I could say that there is no entropic force associated with your "nonG" term, therefore you have a wrong interpretation of the accounting identities. 
But I don't say that. And you shouldn't say that about the "traditional" definition of accounting identities unless you have a really good reason backed up with some peer-reviewed research or at least open presentations of that research. 
You must always try to "[bend] over backwards" to consider the fact that you might be wrong. Or at least note when you are considering some definition that is non-standard that it is in fact non-standard. In my link above, I admit the approach is speculative. I say "At least if [the equation presented] is a valid way to build an economy." I recognize that it is a non-standard definition of the accounting identities. 
Saying people misunderstand a definition and then presenting a non-standard version of that definition is not maintaining the necessary integrity for intellectual discussion and progress.
I've encountered this many times: people basically assume their own approach is a kind of null hypothesis, and other people are wrong because they didn't use the definitions of their model. Even economists with PhDs sometimes do this. However, "You're wrong because I define it differently" is not a valid argument, and it's even worse if you just say "You're wrong", leaving off the part about the definition because you assume everyone is using your definition for some reason. The only people who can assume other people are using their definitions are mainstream economists, because that's the only way science and academia operate. The mainstream consensus is the default, and not recognizing the mainstream consensus or mainstream definitions is failing to lean over backwards and show Feynman integrity.

Commenter maiko followed up with something that is also a teachable moment:
maybe by nature he is just harsher on confused post keynesians and more compliant with asylum inmates.
By "he" maiko is referring to me, and by "asylum inmates", maiko is referring to mainstream economists (at least I think so).

And yes, that's exactly right. At least when it comes to definitions. There are thousands of books and thousands of education programs in the world teaching the mainstream approach to economics. Therefore mainstream economic definitions are the default. If you want to deviate from them, that's fine. However, because the mainstream definitions are the default you need to 1) say you are deviating from them, and 2) have a really good reason for doing so (preferably because it allows you to explain some empirical data).

Update:

In my Tweet of this post, I said that in order to have academic integrity, you must recognize the academic consensus. This has applications far beyond the econoblogosphere and basically sums up the problem with Charles Murray (failing to have academic integrity because he fails to recognize that the academic consensus is that his research is flawed) as well as Bret Stephens in the New York Times (in a twitter argument) who not only failed to recognize the scientific consensus but actually put false statements in his OpEd.

Thursday, May 4, 2017

Labor force dynamic equilibrium

Employment data comes out tomorrow and I have some forecasts that will be "marked to market" (here's the previous update). If the unemployment rate continues to fall, then we're probably not seeing the leading edge of a recession.

I thought I'd add a look at the civilian labor force with the dynamic equilibrium model:



In this picture, we have just two major events over the last ~70 years in the macroeconomy: women entering the workforce and the Great Recession (where people left the workforce). This is the same general picture for inflation and output (see also here). Everything else is a fluctuation.

We'll get a new data point for this series tomorrow as well, so here's a zoomed-in version of the most recent data:

...

Update 5 May 2017

Here's that unemployment rate number. It's looking like the no-recession conditional forecast is the better one:


Tuesday, May 2, 2017

Mathiness in modern monetary theory


Simon Wren-Lewis sends us via Twitter to Medium for an exquisite example of my personal definition of mathiness: using math to obscure rather than enlighten.

Here's the article in a nutshell:
Any proposed government policy is challenged with the same question: “how are you going to pay for it”. 
The answer is: “by spending the money”.
Which may sound counter intuitive, but we can show how by using a bit of mathematics. 
[a series of mathematical definitions] 
And that is why you pay for government expenditure by spending the money [1]. The outlay will be matched by taxation and excess saving to the penny after n transactions. 
Expressing it using mathematics allows you to see what changing taxation rates attempts to do. It is trying to increase and decrease the magnitude of n — the number of transactions induced by the outlay. It has nothing to do with the monetary amount.
I emphasized a sentence that I will go back to in the end. But first let's delve into those mathematical definitions, shall we? And yes, almost every equation in the article is a definition. The first set of equations are definitions of initial conditions. The second is a definition of the relationship between $f$ and $T$ and $S$. The third set of equations define $T$. The fourth defines $S$. The fifth defines $r$. The sixth defines the domain of $f$, $T$, and $S$. Only the seventh isn't a definition. It's just a direct consequence of the previous six as we shall see.

The main equation defined is this:

$$
\text{(1) }\; f(t) \equiv f(0) - \sum_{i}^{t} \left( T_{i} + S_{i}\right)
$$

It's put up on the top of the blog post as if it's $S = k \log W$ on Boltzmann's grave. Already we've started some obfuscation because $f(0)$ is previously set to be $X$, but let's move on. What does this equation say? As yet, not much. For each $i < t$, we take a bite out of $f(0)$ that we arbitrarily separate into $T$ and $S$ which we call taxes and saving because those are things that exist in the real world and so their use may lend some weight to what is really just a definition that:

$$
K(t) \equiv M - N(t)
$$

In fact we can rearrange these terms and say:

$$
\begin{align}
f(t) \equiv & f(0) - \sum_{i}^{t} T_{i} -  \sum_{i}^{t} S_{i}\\
f(t) \equiv & M - T(t) -  S(t)\\
K(t) \equiv & M - N(t)
\end{align}
$$

As you can probably tell, this is about national income accounting identities. In fact, that is Simon Wren-Lewis's point. But let's push forward. The article defines $T$ in terms of a tax rate $0 \leq r < 1$ on $f(t-1)$. However, instead of defining $S$ analogously in terms of a savings rate $0 \leq s < 1$ on $f(t-1)$, the article obfuscates this as a "constraint"

$$
f(t-1) - T_{t} - S_{t} \geq 0
$$

Let's rewrite this with a bit more clarity using a savings rate, substituting the definition of $T$ in terms of a tax rate $r$:

$$
\begin{align}
f(t-1) - r_{t} f(t-1) - S_{t} & \geq 0\\
(1- r_{t}) f(t-1) - S_{t} & \geq 0\\
s_{t} (1- r_{t}) f(t-1) & \equiv S_{t} \; \text{given}\; 0 \leq s_{t} < 1
\end{align}
$$

Let's put both the re-definition of $T_{i}$ and this re-definition of $S_{i}$ in equation (1), where we can now solve the recursion and obtain

$$
f(t) \equiv f(0) \prod_{i}^{t} \left(1-r_{i} \right) \left(1-s_{i} \right)
$$

This equation isn't derived in the Medium article (and the recursion really doesn't simplify without defining a savings rate). Note that both $s_{i}$ and $r_{i}$ are positive numbers less than 1. There's an additional definition that says $r_{t}$ can't be zero for all times. Therefore the product of (one minus) those numbers is another number $0 < a_{i} < 1$ (my real analysis class did come in handy!), so what we really have is:

$$
\text{(2) }\; f(t) \equiv f(0) \prod_{i}^{t} a_{i}
$$

And as we all know, if you multiply a number by a number that is less than one, it gets smaller. If you do that a bunch of times, it gets smaller still.

In fact, that is the content of all of the mathematical definitions in the Medium post. You can call it the polite cheese theorem. If you put out a piece of cheese at a party, and people take a non-zero fraction of it each half hour, the remaining pieces get smaller and smaller until eventually there is nothing left (i.e. somebody takes the last bit of cheese when it is small enough). Which is to say that for $t \gg 1$ (with dimensionless time), $X \equiv T + S$ because $f(t) \to 0$.
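Here's a minimal numerical sketch of that recursion (a Python stand-in for the Mathematica in footnote [2]), assuming constant tax and savings rates:

```python
# "Polite cheese theorem": iterate f(t) = f(t-1)*(1 - r)*(1 - s) and tally the
# taxes T and saving S taken out at each step. For constant rates 0 < r, s < 1
# the running total T + S approaches the initial outlay X.
X = 100.0          # initial government outlay f(0)
r, s = 0.2, 0.1    # tax rate and savings rate (assumed constant here)

f = X
T_total = S_total = 0.0
for _ in range(200):               # t >> 1
    T = r * f                      # T_t = r * f(t-1)
    S = s * (1 - r) * f            # S_t = s * (1 - r) * f(t-1)
    f = f - T - S                  # f(t) = f(t-1) - T_t - S_t
    T_total += T
    S_total += S

print(f, T_total + S_total)        # f -> 0 and T_total + S_total -> X
```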

But that's just an accounting identity and the article just obfuscated that fact by writing it in terms of a recursive function. Anyway, I wrote it all up in Mathematica in footnote [2]. 

Now back to that emphasized sentence above:
Expressing it using mathematics allows you to see what changing taxation rates attempts to do.
No. No it doesn't. If I write $Y = C + S + T$ per the accounting identities, then a change in $T$ by $\delta T$ means [3]

$$
\text{(3) }\; \delta Y =  \left( \frac{\partial C}{\partial T}+ \frac{\partial S}{\partial T} + 1 \right) \delta T
$$

Does consumption rise or fall with increased taxation rates? Does saving rise or fall with increased taxation rate? Whatever the answer to those questions are, they are either models or empirical regularities. The math just helps you figure out the possibilities; it doesn't specify which occurs (for that you need data). The Medium article claims that all that changes is how fast $f(t)$ falls (i.e. the number of transactions before it reaches zero). However that's just the consequence of the assumptions leading to equation (2). And those assumptions represent assumptions about $\partial C/\partial T$ (and to a lesser extent $\partial S/\partial T$). Let's rearrange equation (3) and use $G = T + S$ [4]:

$$
\begin{align}
\delta Y = &  \frac{\partial C}{\partial T}\delta T + \frac{\partial S}{\partial T}\delta T  + \delta T \\
\delta Y = &  \frac{\partial C}{\partial T}\delta T + \frac{\partial G}{\partial T}\delta T \\
\delta Y = &  \frac{\partial C}{\partial T}\delta T + \delta G
\end{align}
$$

And here is where we see the obfuscation of the original prior. In the Medium article, $f(0) = X$ is first called the "initial government outlay". It's $\delta G$. However, later $f(t-1)$ is called "disposable income". That is to say it's $\delta Y - \delta T$. Those two statements are impossible to reconcile with the accounting identities unless $X$ is the initial net government outlay, meaning it is $\delta G - \delta T$. In that case we can reconcile the statements, but only if $\partial C/\partial T = 0$, because we've assumed

$$
\begin{align}
\delta Y - \delta T & = \delta G - \delta T\\
\delta Y & = \delta G
\end{align}
$$

This was a long journey to essentially arrive at the prior behind MMT: government spending is private income, and government spending does not offset private consumption. It was obfuscated by several equations that I clipped out of the quote at the top of this post. And you can see how that prior leads right to the "counterintuitive" statement at the beginning of the quote:
Any proposed government policy is challenged with the same question: “how are you going to pay for it”. 
The answer is: “by spending the money”.
Which may sound counter intuitive, but we can show how by using a bit of mathematics.
No, you don't need the mathematics. If government spending is private income, then (assuming there is only a private and a public sector) private spending is government "income" (i.e. paying the government outlay back by private spending).

Now is this true? For me, it's hard to imagine that $\partial C/\partial T = 0$ or $\delta Y = \delta G$ exactly. The latter is probably a good approximation (effective theory) at the zero lower bound or for low inflation (it's a similar result to the IS-LM model). For small taxation changes, we can probably assume $\partial C/\partial T \approx 0$. Overall, I have no real problem with it. It's probably not a completely wrong collection of assumptions.

What I do have a problem with, however, is the unnecessary mathiness. I think it's there to cover up the founding principle of MMT that government spending is private income. Why? I don't know. Maybe they don't think people will accept that government spending is their income (which could easily be construed as saying we're all on welfare)? Noah Smith called MMT a kind of halfway house for Austrian school devotees, so maybe there's some residual shame about interventionism? Maybe MMT people don't really care about empirical data, and so there's just an effluence of theory? Maybe MMT people don't want to say they're making unfounded assumptions just like mainstream economists (or anyone, really) and so hide them "chameleon model"-style a la Paul Pfleiderer.

Whatever the reason (I like the last one), all the stock-flow analysis, complex accounting, and details of how the monetary system works serve mainly to obscure the primary point that government spending is private income for us as a society. It's really just a consequence of the fact that your spending is my income and vice versa. That understanding is used to motivate a case against austerity: government cutting spending is equivalent to cutting private income. From there, MMT people tell us austerity is bad and fiscal stimulus is good. This advice is not terribly different from what Keynesian economics says. And again, I have no real problem with it.

I'm sure I will get some comments saying I've completely misunderstood MMT and that it's really about something else. If so, please don't forget to tell us all what that "something else" is. In any case, the statement here that "money is a tax credit" plus accounting really does say, basically, that government spending is our income.

But with all the definitions and equations, it ends up looking and feeling like this:


There seems to be a substitution of mathematics for understanding. In fact, the Medium article seems to think the derivation it goes through is necessary to derive its conclusion. But how can a series of definitions lead to anything that isn't itself effectively a definition?

Let me give you an analogy. Through a series of definitions (a derivation I went through as an undergrad math major in that same real analysis course mentioned above), I can come to the statement

$$
\frac{df(x)}{dx} = 0
$$

implies $x$ optimizes $f(x)$ (minimum or maximum). There's a bunch of set theory (Dedekind cuts) and some other theorems that can be proven along the way (e.g. the mean value theorem). This really tells us nothing about the real world unless we make some connection to it, however. For example, I could call $f(x)$ tax revenue and $x$ the tax rate, add some other definitions ($f(x) > 0$ except $f(0) = f(1) = 0$), and say that the Laffer curve is something you can clearly see if you just express it in terms of mathematics.
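To be concrete, here's that "Laffer theory" as a few lines of Python/sympy, with a made-up revenue function satisfying those definitions:

```python
# The "Laffer curve" as a pure consequence of definitions: pick any smooth
# f(x) with f(0) = f(1) = 0 and f(x) > 0 in between (this one is made up),
# and df/dx = 0 picks out the revenue-maximizing tax rate.
import sympy as sp

x = sp.symbols('x')
f = x * (1 - x)                       # made-up revenue function: f(0) = f(1) = 0
optimum = sp.solve(sp.diff(f, x), x)  # [1/2]
print(optimum, f.subs(x, optimum[0])) # tax rate 1/2 maximizes revenue at 1/4
```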

The thing is that the Laffer curve is really just a consequence of those particular definitions. The question of whether or not it's a useful consequence of those definitions depends on comparing the "Laffer theory" to data.

Likewise, whether or not "private spending pays off government spending" is a useful consequence of the definitions in the Medium article critically depend on whether or not the MMT definitions used result in a good empirical description of a macroeconomy.

Without comparing models to data, physics would just be a bunch of mathematical philosophy. And without comparing macroeconomic models to data, economics is just a bunch of mathematical philosophy.

...

Update 5 May 2017:

Here's a graphical depiction of the different ways an identity $G = B + R$ can change depending on assumptions. These would be good pictures to use to try and figure out which one someone has in their head. For example, Neil has the top-right picture in his head. The crowding out picture is the bottom-right. You could call the picture on the bottom-left a "multiplier" picture.


Update 6 May 2017: Fixed the bottom left quadrant of the picture to match the top right quadrant.

...

Footnotes:


[1] This is basically equivalent to what is done in the Medium article.

[2] Here you go:


[3] If someone dares to say something about discrete versus continuous variables I will smack you down with some algebraic topology [pdf].

[4] I think people who reason from accounting identities make the same mistakes that undergrad physics students make when reasoning from thermodynamic potentials. Actually, in the information equilibrium ensemble picture this becomes a more explicit analogy.

The reason for the proliferation of macro models?

Noah Smith wrote something that caught my eye:
One thing I still notice about macro, including the papers Reis cites, is the continued proliferation of models. Almost every macro paper has a theory section. Because it takes more than one empirical paper to properly test a theory, this means that theories are being created in macro at a far greater rate than they can be tested.
This is fascinating, as it's completely unheard of in physics. Nearly every theory or model in a physics paper would be one of four things:

  1. It's compared to some kind of data
  2. It's predicting a new effect that could be measured by new data
  3. It's included for pedagogical reasons
  4. It reduces to existing theories that have been tested

I'll use some of my own papers to demonstrate this:

https://arxiv.org/abs/nucl-th/0202016
The paper above is compared to data. The model fails, but that was the point: we wanted to show that a particular approach would fail.
https://arxiv.org/abs/nucl-th/0505048
The two papers above predict new effects that would be measured at Jefferson Lab.
https://arxiv.org/abs/nucl-th/0509033
The two papers above contain pedagogical examples and math. The first has five different models, but only one is compared to data. The second is more about the math.
Finally in my thesis linked above, I show how the "new" theory I was using connects to existing chiral perturbation theory and lattice QCD.
Of course, the immediate cry will be: What about string theory! But then string theory is about new physics at scales that can't currently be measured. Most string theory papers fall under 2, 3, or 4. Maybe if all these macroeconomic models were supposed to be about quantities we couldn't measure yet, then you might have a point about string theory.

Even Einstein's paper on general relativity showed how it could be tested, how it explained existing data, and how it reduced to existing theories:

Reducing to Newton's law of gravity

New effect: bending of light rays by massive objects.

Explaining Mercury's perihelion rotation

I'm sure there are probably exceptions out there, but the rule is that if you come up with a theory you have to show how it connects/how it could connect to data, other existing theories, or you say you're just working out some math.

In any case, if you have a new model that can or should be tested with empirical data, the original paper should have the first test. Additionally, it should pass that first test ‒ otherwise, why publish? "Here's a model that's wrong" is not exactly something that warrants publication in a peer-reviewed journal except under particular circumstances [1]. And those circumstances are basically the circumstances that occur in my first paper listed above: you are trying to show a particular model approach will not work. In that paper I was showing that a relativistic mean-field effective theory approach in terms of hadrons cannot show the type of effect that was being observed (motivating the quark level picture I would later work on).

The situation Noah describes is just baffling to me. You supposedly had some data you were looking at that gave you the idea for the model, right? Or do people just posit "what-if" models in macroeconomics ... and then continue to consider them as .... um, plausible descriptions of how the world works ... um, without testing them???

...

Footnote:

[1] This is not the same thing as saying don't publish negative results. Negative empirical results are useful. We are talking about papers with theory in them. Ostensibly, the point of theory is to explain data. If it fails in its one job, then why are we publishing it?

[2] When I looked it up for this blog post, it turns out another paper that I didn't know about demonstrates a similar result (about the Hugenholtz-van Hove theorem [pdf]) but was published three months later (in the same journal):

https://arxiv.org/abs/nucl-th/0204008

Monday, May 1, 2017

Updated finance fortune-telling

Here's an update to the S&P 500 model forecast (see here for the previous update and the title reference):


And here's an update of the 10-year interest rate forecast:


Looking more like the 2016 election bump will "evaporate".

Core PCE inflation = -1.7%?

Core PCE has been updated for March 2017. I think I'm going to wait for the revised data before I update the forecasts:


Sunday, April 30, 2017

Can we see a Phillips curve?

The new core PCE inflation number for March comes out May 1st. In preparation for that, I was looking at the dynamic equilibrium model for PCE inflation and adding more shocks to see how well the data could fit. In the process, I noticed something odd/interesting:


These are all positive shocks to PCE inflation, but notice anything about the dates? Let me add NBER recessions on this picture:


Each recession is associated with a positive shock to PCE inflation that precedes it. The only exceptions are the early 2000s recession (for which there is a debate on whether or not it is a recession) and the early 1960s recession (where there isn't data). Actually, it is not entirely out of the question to add one for the early 2000s [1]. Since these shocks precede the recessions, they'll precede the shocks to unemployment (adding the dynamic equilibrium model of unemployment from e.g. here)


This reproduces a "Phillips curve"-like behavior. Inflation rises when unemployment has been falling for awhile after an unemployment shock. Just after a positive inflation shock, we get a shock to unemployment. Therefore inflation will tend to fall (since the shock is over) while unemployment is rising. These fluctuations are likely happening on top of the demographic transition of the 1960s and 70s.

If we are headed into another recession (per here), this might explain the higher inflation of the past year or so (core PCE inflation was over 3% in Jan of 2016 and 2017, having not been above 3% since 2012):


This is interesting as it means rising inflation is a sign of an upcoming recession (the center of the inflation shock precedes the center of the unemployment shock by about 1.3 years on average). However, this could be a just-so story: inflation rises because unemployment gets low, and since recessions are roughly random with a mean time between them of about 8 years, it just appears that we get recessions after unemployment has been falling for a while (and after inflation has risen).

Update 1 May 2017

I had forgotten about the low CPI number earlier in April which should have prepared us for the very low March 2017 number: -1.7% (continuously compounded annual rate of change).

Footnotes:

[1] I don't necessarily think it's useful, but here it is:

Saturday, April 29, 2017

High dimensional budget constraints and economic growth


This is something of a partial idea, a work in progress. Let's say there is some factor of production $M$ allocated across $p$ different firms. The $p$-volume bounded by this budget constraint is:

$$
V = \frac{M^{p}}{p!}
$$

p-volume bounded by budget constraint M

Let's say total output $N$ is proportional to the volume $V$. Take the logarithm of the volume expression 

$$
\log V = p \log M - \log p!
$$

and use Stirling's approximation for a large number of firms:

$$
\log V = p \log M - p \log p + p
$$

Assuming $V \sim e^{\nu t}$ and $M \sim e^{\mu t}$, taking the (logarithmic) derivative (the continuously compounded rate of change), and re-arranging a bit:

$$
\nu = \left(  p+ \left( t - \frac{\log p}{\mu} \right) \frac{dp}{dt}   \right) \mu
$$

Now let's take $p \sim e^{\pi t}$ and re-arrange a bit more:

$$
\text{(1) }\; \nu = p \left( 1 + \left(1 - \frac{\pi}{\mu} \right) \pi t   \right) \mu
$$
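Here's a quick symbolic check of that algebra in Python with sympy, if you don't want to take my word for it:

```python
# Quick symbolic check of equation (1): with M ~ exp(mu t), p ~ exp(pi t) and
# log V = p log M - p log p + p (the Stirling-approximated volume), the growth
# rate nu = d(log V)/dt should equal p * (1 + (1 - pi/mu) * pi * t) * mu.
import sympy as sp

t, mu, pi = sp.symbols('t mu pi', positive=True)
p = sp.exp(pi * t)
M = sp.exp(mu * t)

logV = p * sp.log(M) - p * sp.log(p) + p
nu = sp.diff(logV, t)

claimed = p * (1 + (1 - pi / mu) * pi * t) * mu
print(sp.simplify(nu - claimed))   # 0
```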

In the information equilibrium model, for exponentially growing functions with growth rates $a$ and $b$ we have the relationship (see e.g. here)

$$
a = k b
$$

where $k$ is the information transfer index. So in equation (1) we can identify the IT index

$$
k \equiv p \left( 1 + \left(1 - \frac{\pi}{\mu} \right) \pi t   \right)
$$

In a sense, we have shown one way the information equilibrium condition with $k \neq 1$ can manifest itself. For short time scales $\pi t \ll 1$, we can say $p \approx p_{0} (1 + \pi t)$ and:

$$
k \approx p_{0} \left( 1 + \pi t + \left(1 - \frac{\pi}{\mu} \right) \pi t   \right)
$$

This is an interesting expression. If $\pi > 2 \mu$ then the IT index falls. That is to say, if the number of firms grows at more than twice the rate of the factor of production, the IT index falls. Is this the beginning of an explanation for falling growth with regard to secular stagnation? I'm not so sure.

As I said, this is a work in progress.

Friday, April 28, 2017

Update to the predicted path of NGDP

The new GDP numbers are out today, and RGDP came in a bit low per the dynamic equilibrium. However the NGDP number is basically on track [1] with the prediction started over two years ago (most recently updated here):



I added orange dots and an orange dotted line to show the data available at the time. It looks like we can pretty well reject the "old normal" exponential growth model (gray dashed in both graphs). In the second graph, the model NGDP growth rate (blue line) appears biased high by 0.2 percentage points compared to the linear fit to the data (dotted yellow line).

There are still potential revisions (see the difference between the orange dotted and yellow curves), so we'll see what the second estimate says on 26 May 2017.

...

Footnotes:

[1] Meaning the deflator was high, which it was at 2.2%.

Thursday, April 27, 2017

What will the GDP number be tomorrow?

Menzie Chinn shows us the various estimates from GDPnow (Atlanta Fed), e-forecasting, and Macroeconomic Advisers.

I thought I'd put a prediction out there using this model (which estimates RGDP per capita [prime age], so this includes an extrapolation from the model plus an estimate of prime age population growth, with the errors propagated from each; nearly all of the error, by an order of magnitude, is in the RGDP model). The result is (SAAR, chained 2009 dollars):

16,933.2 ± 63.9 billion dollars (1σ)

or

0.71 ± 0.38 % growth [1] ... i.e. 0.3% to 1.1%

Chinn tells us the Bloomberg consensus is 1.1%. Macroeconomic Advisers says 0.3%. GDPnow says 0.2%. The dynamic equilibrium model of RGDP per capita basically covers that entire spread. However, the dynamic equilibrium model has only two parameters (since we're not in a shock). That means that all the parameters of the GDPnow model or MA's model are getting you just a few tenths of a percentage point.

GDPnow seems to take into account the "low first quarter effect"; I wonder if MA does the same?

...

Update 28 April 2017:

The number is here and it is a bit lower than the model shows:

16,842.4
(+ 0.2 %)

which means Quarter/Quarter growth (that I show above) was 0.2% (and annualized is 0.7% which you might have seen in news reports e.g. here).

However, this is the advance estimate and there is a tendency for these to be revised (though it could be the "low first quarter effect" mentioned above). So we'll see on 26 May 2017 what happens.

...

Footnotes:

[1] Quarter on quarter SAAR. Based on the not-yet-revised 16,813.3 billion number for Q4 2016.

Wednesday, April 26, 2017

Falsehoods scientists believe about economics

I stumbled upon a fun list of "falsehoods programmers believe about economics", and tweeted that I thought it also seems pretty representative of falsehoods scientists believe about economics. However, after I thought about it for a bit I realized that there are really two classes of scientists: scientists who believe those falsehoods and those that don't. 

Just kidding.

But really, there are just one of two falsehoods scientists believe about economics:
  1. Economics is a scientific field like any other that speaks in the public sphere using theories that are empirically grounded and responds to changes in the data, or ...
  2. Economics is a field rife with methodological issues and bad models that can be greatly improved not only by the approaches used in the scientist in question's own discipline, but in fact by the scientist in question's own work.
I used to be in category 1, but have since moved into category 2 [1].

...

Footnotes:

[1] That's a bit of a self-deprecating joke. Actually, the version (v2) of Fielitz and Borchardt's paper that I saw (which wasn't really relevant to my work at the time) came up during a search for something entirely different (which was relevant). I recalled the F&B paper (particularly the reference to "non-physical systems") while sitting through a presentation on prediction markets, and a bit later Paul Krugman wrote this blog post ("It is not easy to derive supply and demand curves for an individual good from general equilibrium with rational consumers blah blah.") which I took as a challenge and started this blog.

The F&B paper was subsequently updated (v4) to include a reference to this blog.

Tuesday, April 25, 2017

Should the left engage with neoclassical economics?

Vox talked with Chris Hayes of MSNBC in one of their podcasts. One of the topics that was discussed was neoclassical economics:
[Vox:] The center-of-right ideas the left ought to engage[?] 
[Hayes:] The entirety of the corpus of Hayek, Friedman, and neoclassical economics. I think it’s an incredibly powerful intellectual tradition and a really important one to understand, these basic frameworks of neoclassical economics, the sort of ideas about market clearing prices, about the functioning of supply and demand, about thinking in marginal terms. 
I think the tradition of economic thinking has been really influential. I think it's actually a thing that people on the left really should do — take the time to understand all of that. There is a tremendous amount of incredible insight into some of the things we're talking about, like non-zero-sum settings, and the way in which human exchange can be generative in this sort of amazing way. Understanding how capitalism works has been really, really important for me, and has been something that I feel like I'm a better thinker and an analyst because of the time and reading I put into a lot of conservative authors on that topic.
I can hear some of you asking: Do I have to?

The answer is: No.

Why? Because you can get the same understanding while also understanding where these ideas fall apart ‒ that is to say understanding the limited scope of neoclassical economics – using information theory.

Prices and Hayek

One thing that I think needs to be more widely understood is that Hayek did have some insight into prices having something to do with information, but got the details wrong. He saw market prices aggregating information; a crop failure, a population boom, speculating on turning rice into ethanol ‒ these events would cause food prices to increase, and that price change represented knowledge about the state of the world being communicated. However, Hayek was writing in a time before communication theory (Hayek's The Use of Knowledge in Society was written in 1945, a few years before Shannon's A Mathematical Theory of Communication in 1948). The issue is evident in my list. The large amount of knowledge about biological or ecological systems, population, and social systems is all condensed into a single number that goes up. Can you imagine the number of variables you'd need to describe crop failures, population booms, and market bubbles? Thousands? Millions? How many variables of information do you get out via the price of rice in the market? One.

What we have is a complex multidimensional space of possibilities being compressed into a one-dimensional space of possibilities (i.e. prices); therefore, if the price represents information aggregation, we are losing a great deal of that information in the process. As I talk about in more detail here, one way neoclassical economics deals with this is to turn that multidimensional space into a single variable (utility), but that just means we've compressed all that information into something else (e.g. non-transitive or unstable preferences).

However, we can re-think the price mechanism's relationship with information. Stable prices mean a balance of crop failures and crop booms (supply), population declines and population booms (demand), speculation and risk-aversion (demand). The distribution of demand for rice is equal to the distribution of the supply of rice (see the pictures above: the transparent one is the "demand", the blue one is the "supply"). If prices change, the two distributions would have to have been unequal. If they come back to the original stable price ‒ or another stable price ‒ the two distributions must have become equal again. That is to say prices represent information about the differences (or changes) in the distributions. Coming back to a stable price means information about the differences in one distribution must have flowed (through a communication channel) to the other distribution. We can call one distribution D and the other S, for demand and supply. The price is then a function of changes in D and changes in S, or

p = f(ΔD, ΔS)

Note that we observe that an increase in S that's bigger than an increase in D generally leads to a falling price, while an increase in D that is bigger than the increase in S generally leads to a rising price. That means we can try

p = ΔD/ΔS

for our initial guess. Instead of a price aggregating information, we have a price detecting the flow of information. Constant prices tell us nothing. Price changes tell us information has flowed (or been lost) between one distribution and the other.

This picture also gets rid of the dimensionality problem: the distribution of demand can be as complex and multidimensional (i.e. depend on as many variables) as the distribution of supply.

Marginalism and supply and demand

Marginalism is far older than Friedman or Hayek, going back at least to Jevons and Marshall. In his 1892 thesis, Irving Fisher tried to argue that if you have gallons of one good A and bushels of another good B that were exchanged for each other, then the last increment (the margin) was exchanged at the same rate as the total amounts A and B, i.e.

ΔA/ΔB = A/B

calling both sides of the equation the price of B in terms of A. Note that the left side is our price equation above, just in terms of A and B (you could call A the demand for B). In fact, we can get a bit more out of this equation if we say

pₐ = A/B

If you hold A = A₀ constant and change B, the price goes down. For fixed demand, increasing supply causes prices to fall – a demand curve. Likewise if you hold B = B₀ constant and change A, the price goes up – a supply curve. However if we take tiny increments of A and B and use a bit of calculus (ΔA/ΔB →dA/dB) the equation only allows A to be proportional to B. It's quite limited, and Fisher attempts to break out of this by introducing marginal utility. However, thinking in terms of information can again help us.

Matching distributions

If we think of our distribution of A and distribution of B (like the distribution of supply and demand above), each "draw" event from those distributions (like a draw of a card, a flip of a coin, or a roll of a die) contains I₁ information (i.e. a flip of a coin contains 1 bit of information) for A and I₂ for B. If the distributions of A and B are in balance ("equilibrium"), each draw event from each distribution (a transaction event) will match in terms of information. Now it might cost two or three gallons of A for each bushel of B, so the number of draws on either side will be different in general, but as long as the number of draws is large the total information from those draws will be the same:

n₁ I₁ = n₂ I₂

We'll call I₁/I₂ = k for convenience so that

k n₁ = n₂

Now say the smallest amount of A is ΔA and likewise for B. That means

n₁ = A/ΔA
n₂ = B/ΔB

i.e. the number of gallons of A is the amount of A (i.e. A) divided by 1 gallon of A (i.e. ΔA). Putting this together and re-arranging a bit we have

ΔA/ΔB = k A/B

This is just Fisher's equation again except there's a coefficient in it, which makes the result a bit more interesting when you take tiny increments (ΔA/ΔB → dA/dB) and use a bit of calculus. But there's a more useful bit of understanding you get from this approach that you don't get from neoclassical economics. What we have is information flowing between A and B, and we've assumed that information transfer is perfect. But markets aren't perfect, and all we can really say is that the most information that gets from the distribution of A to the distribution of B is all of the information in the distribution of A. Basically

n₁ I₁ ≥ n₂ I₂

Following this through the derivation above, we find

p = ΔA/ΔB ≤ k A/B

The real prices in a real economy will fall below the neoclassical prices. There's also another assumption in that derivation – that the number of transaction events is large. So even if the information transfer was ideal, neoclassical economics only applies in markets that are frequently traded. 
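For the ideal case, a few lines of sympy show where that condition leads (a sketch of the math above, nothing more): solving dA/dB = k A/B gives A ∝ B^k, and holding A fixed in p = k A/B reproduces the demand curve behavior.

```python
# Solving the ideal information equilibrium condition dA/dB = k*A/B gives
# A = A0 * (B/B0)**k -- Fisher's proportional result is the k = 1 case.
import sympy as sp

B, k, A0 = sp.symbols('B k A0', positive=True)
A = sp.Function('A')

print(sp.dsolve(sp.Eq(A(B).diff(B), k * A(B) / B), A(B)))  # A(B) = C1*B**k

# Supply and demand from p = k*A/B: hold demand A = A0 fixed and the price
# falls as supply B rises (a demand curve).
p_demand = k * A0 / B
print(sp.diff(p_demand, B))   # -A0*k/B**2 < 0
```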

Another insight we get is that supply and demand doesn't always work in the simple way described in Marshall's diagrams. We had to make the assumption that A or B was relatively constant while the other changed. In many real world examples we can't make that assumption. A salient one today is the (empirically incorrect) claim that immigration lowers wages. A naive application of supply and demand (an increased supply of labor lowers the price of labor) ignores the fact that more people means more people to buy the goods and services produced by labor. Thinking in terms of information, it is impossible to say that you've increased the number of labor supply events without increasing the number of labor demand events, so A and B must both increase.

Instead of the neoclassical picture of ideal markets and simple supply and demand, we have the picture the left (and to be fair many economists) tries to convey of not only market failures and inefficiency but more complex interactions of supply and demand. However, it is also possible through collective action to mend or mitigate some of these failures. We shouldn't assume that just because a market spontaneously formed or produced a result it is working, and we shouldn't assume that because a price went up either demand went up or supply went down.

The market as an algorithm

The picture above is of a market as an algorithm matching distributions by raising and lowering a price until it reaches a stable price. In fact, this picture is of a specific machine learning algorithm called Generative Adversarial Networks (GAN, described in this Medium article or in the original paper). The idea of the market as an algorithm to solve a problem is not new. For example one of the best blog posts of all time uses linear programming as the algorithm, giving an argument for why planned economies will likely fail, but the same reasons imply we cannot check the optimality of the market allocation of resources (therefore claims of markets as optimal are entirely faith-based). The Medium article uses a good analogy that I will repeat here:


Instead of the complex multidimensional distributions we have paintings. The "supply" B is the forged painting, the demand A is the "real" painting. Instead of the random initial input, we have the complex, irrational, entrepreneurial, animal spirits of people. The detective is the price p. When the detective can't tell the difference between the paintings (i.e. when the price reaches a relatively stable value because the distributions are the same), we've reached our solution (a market equilibrium). 

Note that the problem the GAN algorithm tackles can be represented as a two-player minimax game from game theory. The thing is that with the wrong settings, algorithms fail and you get garbage. I know this from experience in my regular job researching machine learning, sparse reconstruction, and signal processing algorithms. So depending on the input data (i.e. human behavior), we shouldn't expect to get good results all of the time. These failures are exactly the failure of information to flow from the real painting to the forger through the detective – the failure of information from the demand to reach the supply via the price mechanism.

An interpretation of neoclassical economics for the left

The understanding of neoclassical economics provided by information theory and machine learning algorithms is better equipped to understand markets. Ideas that were posited as articles of faith or created through incomplete arguments by Hayek and Friedman are not the whole story and leave you with no knowledge of the ways the price mechanism, marginalism, or supply and demand can go wrong. In fact, leaving out the failure modes effectively declares many of the concerns of the left moot by fiat. The potential and actual failures of markets are a major concern of the left, and are frequently part of discussions of inequality and social justice.

The left doesn't need to follow Chris Hayes's advice and engage with Hayek, Friedman, and the rest of neoclassical economics. The left instead needs to engage with a real world vision of economics that recognizes its potential failures. Understanding economics in terms of information flow is one way of doing just that.

...

Update 26 April 2017

I must add that the derivation of the information equilibrium condition (i.e. dA/dB = k A/B) is originally from a paper by Peter Fielitz and Guenter Borchardt and applied to physical systems. The paper is always linked in the side bar, but it doesn't appear on mobile devices.