## Friday, May 19, 2017

### Principal component analysis of state unemployment rates

One of my hobbies on this blog is applying principal component analysis (PCA) to economic data; for example, here's some jobs data by industry (more here). I am not claiming this is original research (many economics papers have used PCA, though a quick Google search did not turn up this particular application).

Anyway, this is based on seasonally adjusted FRED data (e.g. here for WA) and I put the code up in the dynamic equilibrium repository. Here is all of the data along with the US unemployment rate (gray):

It's a basic Karhunen–Loève decomposition (Mathematica function here). Blue is the first principal component; the rest of the components aren't as relevant. To a pretty good approximation, the business cycle in employment is a national phenomenon:

There's an overall normalization factor based on the fact that we have 50 states. We can see the first (blue) and second (yellow) components alongside the national unemployment rate (gray, right scale):

Basically, the first principal component is the national business cycle. The second component is interesting: it suggests that states differed in how they experienced the two big recessions of the past 40 years (the early 1980s and the Great Recession), in opposite directions. The best description of this component is that some states did much worse in the 1980s while some did a bit better in the 2000s (see the first graph of this post).
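If you want to reproduce the general approach, here's a minimal sketch of the decomposition using NumPy with a synthetic panel standing in for the FRED state series (the actual analysis uses Mathematica; every number below is made up for illustration):

```python
import numpy as np

# Synthetic stand-in for the panel of 50 state unemployment rates:
# a common "national" cycle plus state-level noise (illustration only).
rng = np.random.default_rng(0)
t = np.linspace(1976, 2017, 500)
national = 6 + 2 * np.sin(2 * np.pi * t / 8)
states = national[:, None] + 0.5 * rng.standard_normal((t.size, 50))

# Karhunen-Loeve / PCA via SVD of the demeaned data matrix
X = states - states.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)

pc1 = U[:, 0] * s[0]                 # first principal component series
share = s[0] ** 2 / np.sum(s ** 2)   # its share of the total variance
print(f"first component: {share:.0%} of the variance")
```

With a strong common cycle like this, the first component carries most of the variance, which is the sense in which the business cycle in employment is a national phenomenon.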

As happened before, the principal component is pretty well modeled by the dynamic equilibrium model (just like the national data):

The transitions (recession centers) are at 1981.0, 1991.0, 2001.7, 2008.8 and a positive shock at 2014.2. These are consistent with the national data transitions (1981.1, 1991.1, 2001.7, 2008.8 and 2014.4).
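For a sense of how such transition centers are extracted, here's a hedged sketch of the dynamic equilibrium idea on synthetic data: log unemployment is assumed to follow a constant slope plus a logistic shock, and the shock center is fit with `scipy.optimize.curve_fit`. Only the 2008.8 center is taken from the post; all other parameter values are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

# Dynamic equilibrium sketch: log u(t) = constant slope + one logistic shock
def log_u(t, alpha, a, t0, w, c):
    return alpha * t + a / (1 + np.exp(-(t - t0) / w)) + c

# Synthetic series with a shock centered at 2008.8 (the post's value)
t = np.linspace(2000, 2017, 200)
rng = np.random.default_rng(1)
y = log_u(t, -0.09, 0.6, 2008.8, 0.4, 185.0) + 0.01 * rng.standard_normal(t.size)

# Fit the model and read off the recession center t0
popt, _ = curve_fit(log_u, t, y, p0=[-0.1, 0.5, 2009.0, 0.5, 186.0])
print(f"fitted shock center: {popt[2]:.1f}")
```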

## Wednesday, May 17, 2017

### My article at Evonomics

I have an article up at Evonomics about the basics of information equilibrium looking at it from the perspective of Hayek's price mechanism and the potential for market failure. Consider this post a forum for discussion or critiques. I earlier put up a post with further reading and some slides linked here.

I also made up a couple of diagrams illustrating price changes that I didn't end up using:

## Tuesday, May 16, 2017

### Explore more about information equilibrium

Originally formulated by physicists Peter Fielitz and Guenter Borchardt for natural complex systems, information equilibrium [arXiv:physics.gen-ph] is a potentially useful framework for understanding many economic phenomena. Here are some additional resources:

A tour of information equilibrium
Slide presentation (51 slides)

Dynamic equilibrium and information equilibrium
Slide presentation (19 slides)

Maximum entropy and information theory approaches to economics
Slide presentation (27 slides)

Information equilibrium as an economic principle
Pre-print/working paper (44 pages)

## Saturday, May 13, 2017

### Theory and evidence in science versus economics

Noah Smith has a fine post on theory and evidence in economics so I suggest you read it. It is very true that there should be a combined approach:
> In other words, econ seems too focused on "theory vs. evidence" instead of using the two in conjunction. And when they do get used in conjunction, it's often in a tacked-on, pro-forma sort of way, without a real meaningful interplay between the two. ... I see very few economists explicitly calling for the kind of "combined approach" to modeling that exists in other sciences - i.e., using evidence to continuously restrict the set of usable models.

This assumes, though, that "theory" means the same thing in economics as it does in science. In fact, there is a massive difference between "theory" in economics and "theory" in the sciences.

"Theory" in science

In science, "theory" generally speaking is the amalgamation of successful descriptions of empirical regularities in nature concisely packaged into a set of general principles that is sometimes called a framework. Theory for biology tends to stem from the theory of evolution which was empirically successful at explaining a large amount of the variation in species that had been documented by many people for decades. There is also the cell model. In geology you have plate tectonics that captures a lot of empirical evidence about earthquakes and volcanoes. Plate tectonics explains some of the fossil record as well (South America and Africa have some of the same fossils up to a point at which point they diverge because the continents split apart). In medicine, you have the germ theory of disease.

The quantum field theory framework is the most numerically precise amalgamation of empirical successes known to exist. But physics has been working with this kind of theory since the 1600s when Newton first came up with a concise set of principles that captured nearly all of the astronomical data about planets that had been recorded up to that point (along with Galileo's work on projectile motion).

But it is important to understand that the general usage of the word "theory" in the sciences is just shorthand for being consistent with past empirical successes. That's why string theory can be theory: it appears to be consistent with general relativity and quantum field theory and therefore can function as a kind of shorthand for the empirical successes of those theories ... at least in certain limits. This is not to say your new theoretical model will automatically be correct, but at least it doesn't obviously contradict Einstein's E = mc² or Newton's F = ma in the respective limits.

Theoretical biology (say, determining the effect of a change in habitat on a species) or theoretical geology (say, computing how the Earth's magnetic field changes) is similarly based on the empirical successes of biology and geology. These theories are then used to understand data and evidence and can be rejected if evidence contradicting them arises.

As an aside, experimental sciences (physics) have an advantage over observational ones (astronomy) in that the former can conduct experiments in order to extract the empirical regularities used to build theoretical frameworks. But even in experimental sciences, experiments might be harder to do in some fields than others. Everyone seems to consider physics the epitome of science, but in reality the only reason physics probably had a leg up in developing the first real scientific framework is that the necessary experiments required to observe the empirical regularities are incredibly easy to set up: a pendulum, some rocks, and some rolling balls and you're pretty much ready to experimentally confirm everything necessary to posit Newton's laws. In order to confirm the theory of evolution, you needed to collect species from around the world, breed some pigeons, and look at fossil evidence. That's a bit more of a chore than rolling a ball down a ramp.

"Theory" in economics

Theory in economics primarily appears to be solving utility maximization problems, but unlike science there does not appear to be any empirical regularity that is motivating that framework. Instead there are a couple of stylized facts that can be represented with the framework: marginalism and demand curves. However these stylized facts can also be represented with ... supply and demand curves. The question becomes what empirical regularity is described by utility maximization problems but not by supply and demand curves. Even the empirical work of Vernon Smith and John List can be described by supply and demand curves (in fact, at the link they can also be described by information equilibrium relationships).

Now there is nothing wrong with using utility maximization as a proposed framework. That is to say there's nothing wrong with positing any bit of mathematics as a potential framework for understanding and organizing empirical data. I've done as much with information equilibrium.

However the utility maximization "theory" in economics is not the same as "theory" in science. It isn't a shorthand for a bunch of empirical regularities that have been successfully described. It's just a proposed framework; it's mathematical philosophy.

The method of nascent science

This isn't necessarily bad, but it does mean that the interplay between theory and evidence reinforcing or refuting each other isn't the iterative process we need to be thinking about. I think a good analogy is an iterative algorithm: it produces a result, uses that result to change some parameters or the initial guess, and feeds those back into the same algorithm. Such an iteration can converge to a final result, but only if the initial guess starts off close enough to it. This is the case in science: the current state of knowledge is probably decent enough that the iterative process of theory and evidence will converge. You can think of this as the scientific method ... for established science.
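The convergence-depends-on-the-starting-point behavior can be made concrete with a toy iteration (Newton's method on an arbitrarily chosen textbook cubic), where the same algorithm converges from a good initial guess and cycles forever from a bad one:

```python
# Newton's method on f(x) = x^3 - 2x + 2: whether the iteration converges
# depends entirely on the initial guess (the cubic is an arbitrary example).
def newton(x, steps=50):
    for _ in range(steps):
        f, fprime = x**3 - 2 * x + 2, 3 * x**2 - 2
        x = x - f / fprime
    return x

good = newton(-2.0)  # converges to the real root near -1.769
bad = newton(0.0)    # bounces between 0 and 1 forever, never converging
print(good, bad)
```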

For economics, it does not appear that the utility maximization framework is close enough to the "true theory" of economics for the method of established science to converge. What's needed is the scientific method as it was used back when science first got its start. In a post from about a year ago, I called this the method of nascent science. That method is based on the metric of usefulness, rather than the model rejection of established science. Here's a quote from that post:
> Awhile ago, Noah Smith brought up the issue in economics that there are millions of theories and no way to reject them scientifically. And that's true! But I'm fairly sure we can reject most of them for being useless.
>
> "Useless" is a much less rigorous and much broader category than "rejected". It also isn't necessarily a property of a single model on its own. If two independently useful models are completely different but are both consistent with the empirical data, then both models are useless. Because both models exist, they are useless. If one didn't [exist], the other would be useful.
Noah Smith (in the post linked at the beginning of this post) put forward three scenarios of theory and evidence in economics:
> 1. Some papers make structural models, observe that these models can fit (or sort-of fit) a couple of stylized facts, and call it a day. Economists who like these theories (based on intuition, plausibility, or the fact that their dissertation adviser made the model) then use them for policy predictions forever after, without ever checking them rigorously against empirical evidence.
> 2. Other papers do purely empirical work, using simple linear models. Economists then use these linear models to make policy predictions ("Minimum wages don't have significant disemployment effects").
> 3. A third group of papers do empirical work, observe the results, and then make one structural model per paper to "explain" the empirical result they just found. These models are generally never used or seen again.
Using these categories, we can immediately say 1 and 3 are useless. If a model is never checked rigorously against data, or if a model is never used or seen again, it can't possibly be useful.

In this case, the theories represent at best mathematical philosophy (as I mentioned at the end of the previous section). It's not really theory in the (established) scientific sense.

But!

*Mathematical Principles of Natural Philosophy*

Sometimes a little bit of mathematical philosophy will have legs. Isaac Newton's work, when it was proposed, was mathematical philosophy. It says so right in the title. So there's nothing wrong with the proliferation of "theory" (by which we mean mathematical philosophy) in economics. But it shouldn't be treated as "theory" in the same sense as in science. Most of it will turn out to be useless, which is fine if you don't take it seriously in the first place. And using economic "theory" for policy would be like using Descartes to build a mag-lev train ...

...

Update 15 May 2017: Nascent versus "soft" science

I made a couple of grammatical corrections and added a "does" and a "though" to the sentence after the first Noah Smith quote in my post above.

But I did also want to add the point that by "established science" vs "nascent science" I don't mean the same thing as many people mean when they say "hard science" vs "soft science". So-called "soft" sciences can be established or nascent. I think of economics as a nascent science (economies and many of the questions about them barely existed until modern nation states came into being). I also think that some portions will eventually become a "hard" science (e.g. questions about the dynamics of the unemployment rate), while others might become a "soft" science with the soft science pieces being consumed by sociology (e.g. questions about what makes a group of people panic or behave as they do in a financial crisis).

I wrote up a post that goes into this in more detail about a year ago. The main idea is that economics might be explicable -- even as a hard science -- in cases where the law of large numbers kicks in and agents' actions do not become highly correlated (where economics becomes more about the state space itself than about the actions of agents in that state space ... Lee Smolin called this "statistical economics" in analogy with statistical mechanics).

I think for example psychology is an established soft science. Its theoretical underpinnings are in medicine and neuroscience. That's what makes the replication crisis in psychology a pretty big problem for the field. In economics, it's actually less of a problem (the real problem is not the replication issue, but that we should all be taking the econ studies less seriously than we take psychology studies).

Exobiology or exogeology could be considered nascent hard sciences. Another nascent hard science might be so-called "data science": we don't quite know how to deal with the huge amounts of data that are only recently available to us and the traditional ways we treat data in science may not be optimal.

## Monday, May 8, 2017

### Government spending and receipts: a dynamic equilibrium?

I was messing around with FRED data and noticed that the ratio of government expenditures to government receipts seems to show a dynamic equilibrium that matches up with the unemployment rate. Note this is government spending and income at all levels (federal + state + local). So I ran it through the model [1] and sure enough it works out:

Basically, the ratio of expenditures to receipts goes up during a recession (i.e. deficits increase at a faster rate) and down in the dynamic equilibrium outside of recessions (i.e. deficits increase at a slower rate or even fall). The dates of the shocks to this dynamic equilibrium match pretty closely with the dates for the shocks to unemployment (arrows).

This isn't saying anything ground-breaking: recessions lower receipts and increase use of social services (so expenditures over receipts will go up). It is interesting, however, that the (relative) rate of improvement towards budget balance is fairly constant from the 1960s to the present date ... independent of major fiscal policy changes. You might think that all the disparate changes in state and local spending are washing out the big federal spending changes, but in fact the federal component is the larger one, so it dominates the graph above. Indeed, the data look almost the same with just the federal component (see the result below). So we can strengthen the conclusion: the (relative) rate of improvement towards federal budget balance is fairly constant from the 1960s to the present date ... independent of major federal fiscal policy changes.

...

Footnotes:

[1] The underlying information equilibrium model is GE ⇄ GR (expenditures are in information equilibrium with receipts, except during shocks).

## Friday, May 5, 2017

### Dynamic equilibrium in employment-population ratio in OECD countries

John Handley asks on Twitter whether the dynamic equilibrium model works for the employment-population ratio in countries besides the US. So I re-ran the model on some of the shorter OECD time series available on FRED (most of them were short, and I could easily automate the procedure for time series of approximately the same length).

As with the US, some countries seem to be undergoing a "demographic transition" with women entering the workforce, so most of the data sets are for men only (I just realized I actually have both for Greece). These are all for 15-64 year olds, in cases where there was data covering at least 2000-2017. Some of the series only go back to 2004 or 2005, which is really too short to be conclusive. I also left off the longer time series; those come later in an update.

Anyway, the men-only country list is: Denmark, Estonia, Greece, Ireland, Iceland, Italy, New Zealand, Portugal, Slovenia, Turkey, and South Africa. Both men and women are included for: France, Greece (again), Poland, and Sweden. I searched FRED manually, so these are just the countries that came up.

Here are the results (some have 1 shock, some have 2):

What is interesting is that while the global financial crisis seems to often be conflated with the Greek debt crisis, the Greek debt crisis appears to hit much later (centered at 2011.2). For example, the recession in Iceland is centered at 2008.7 (about 2.5 years earlier, closer to the recession center for the US).

...

Update:

Here are the results for Australia, Canada, and Japan which have longer time series:

### "You're wrong because I define it differently"

There is a problem in the econoblogosphere, especially among heterodox approaches, where practitioners do not recognize that their approach is non-standard. I'm not trying to single out commenter Peiya, but this comment thread is a teachable moment, and I thought my response had more general application.

Peiya started off saying:
> Many economic theories are based on wrong interpretation on accounting identities and underlying data semantics.
and went on to talk about a term called "NonG". In a response to my question about the definitions of "NonG", Peiya responded:
> Traditional definition of the "income accounting identity" (C+I+G = C + S + T or S-I = G-T) is widely-misused with implicit assumption NonG = 0.
So Peiya was using a different definition. My response is what I wanted to promote to a blog post (with one change to link to Paul Romer's blog post on Feynman integrity where I realize the direct quote uses the word "leaning" rather than "bending"):
> For the purposes of this blog, we'll stick to the traditional definition unless there is e.g. a model of empirical data that warrants a change of definition. Changing definitions of accounting identities and saying "Many economic theories are based on wrong interpretation on accounting identities" is a bit disingenuous.
>
> Imagine if I said you were wrong because I define accounting identities as statistical equilibrium potentials? I could say that there is no entropic force associated with your "nonG" term, therefore you have a wrong interpretation of the accounting identities.
>
> But I don't say that. And you shouldn't say that about the "traditional" definition of accounting identities unless you have a really good reason backed up with some peer-reviewed research or at least open presentations of that research.
>
> You must always try to "[bend] over backwards" to consider the fact that you might be wrong. Or at least note when you are considering some definition that is non-standard that it is in fact non-standard. In my link above, I admit the approach is speculative. I say "At least if [the equation presented] is a valid way to build an economy." I recognize that it is a non-standard definition of the accounting identities.
>
> Saying people misunderstand a definition and then presenting a non-standard version of that definition is not maintaining the necessary integrity for intellectual discussion and progress.
I've encountered this many times: people basically assume their own approach is a kind of null hypothesis, and that other people are wrong because they didn't use the definitions of their model. Even economists with PhDs sometimes do this. However, "You're wrong because I define it differently" is not a valid argument, and it's even worse if you just say "You're wrong", leaving off the part about the definition because you assume everyone is using your definition for some reason. The only people who can assume others are using their definitions are mainstream economists, because that's the only way science and academia operate. The mainstream consensus is the default, and not recognizing the mainstream consensus or mainstream definitions is failing to lean over backwards and show Feynman integrity.

Commenter maiko followed up with something that is also a teachable moment:
> maybe by nature he is just harsher on confused post keynesians and more compliant with asylum inmates.
By "he" maiko is referring to me, and by "asylum inmates", maiko is referring to mainstream economists (at least I think so).

And yes, that's exactly right, at least when it comes to definitions. There are thousands of books and thousands of education programs in the world teaching the mainstream approach to economics. Therefore mainstream economic definitions are the default. If you want to deviate from them, that's fine. However, because the mainstream definitions are the default, you need to 1) say that you are deviating from them, and 2) have a really good reason for doing so (preferably because doing so allows you to explain some empirical data).

Update:

In my tweet of this post, I said that in order to have academic integrity, you must recognize the academic consensus. This has applications far beyond the econoblogosphere. It basically sums up the problem with Charles Murray (who fails to have academic integrity because he fails to recognize that the academic consensus is that his research is flawed), as well as with Bret Stephens at the New York Times (in a Twitter argument), who not only failed to recognize the scientific consensus but actually put false statements in his op-ed.

## Thursday, May 4, 2017

### Labor force dynamic equilibrium

Employment data comes out tomorrow and I have some forecasts that will be "marked to market" (here's the previous update). If the unemployment rate continues to fall, then we're probably not seeing the leading edge of a recession.

I thought I'd add a look at the civilian labor force with the dynamic equilibrium model:

In this picture, we have just two major events over the last ~70 years in the macroeconomy: women entering the workforce and the Great Recession (where people left the workforce). This is the same general picture for inflation and output (see also here). Everything else is a fluctuation.

We'll get a new data point for this series tomorrow as well, so here's a zoomed-in version of the most recent data:

...

Update 5 May 2017

Here's that unemployment rate number. It's looking like the no-recession conditional forecast is the better one:

## Tuesday, May 2, 2017

### Mathiness in modern monetary theory

Simon Wren-Lewis sends us via Twitter to Medium for an exquisite example of my personal definition of mathiness: using math to obscure rather than enlighten.

Here's the article in a nutshell:
> Any proposed government policy is challenged with the same question: "how are you going to pay for it".
>
> The answer is: "by spending the money".
>
> Which may sound counter intuitive, but we can show how by using a bit of mathematics.
>
> [a series of mathematical definitions]
>
> And that is why you pay for government expenditure by spending the money [1]. The outlay will be matched by taxation and excess saving to the penny after n transactions.
>
> Expressing it using mathematics allows you to see what changing taxation rates attempts to do. It is trying to increase and decrease the magnitude of n — the number of transactions induced by the outlay. It has nothing to do with the monetary amount.
I emphasized a sentence that I will come back to at the end. But first let's delve into those mathematical definitions, shall we? And yes, almost every equation in the article is a definition. The first set of equations are definitions of initial conditions. The second is a definition of the relationship between $f$, $T$, and $S$. The third set of equations defines $T$. The fourth defines $S$. The fifth defines $r$. The sixth defines the domain of $f$, $T$, and $S$. Only the seventh isn't a definition, and it's just a direct consequence of the previous six, as we shall see.

The main equation defined is this:

$$\text{(1) }\; f(t) \equiv f(0) - \sum_{i}^{t} \left( T_{i} + S_{i}\right)$$

It's put up at the top of the blog post as if it were $S = k \log W$ on Boltzmann's grave. Already we've started some obfuscation, because $f(0)$ was previously set to be $X$, but let's move on. What does this equation say? As yet, not much. For each $i < t$, we take a bite out of $f(0)$ that we arbitrarily separate into $T$ and $S$, which we call taxes and saving because those are things that exist in the real world, so their use may lend some weight to what is really just the definition:

$$K(t) \equiv M - N(t)$$

In fact we can rearrange these terms and say:

\begin{align} f(t) \equiv & f(0) - \sum_{i}^{t} T_{i} - \sum_{i}^{t} S_{i}\\ f(t) \equiv & M - T(t) - S(t)\\ K(t) \equiv & M - N(t) \end{align}

As you can probably tell, this is about national income accounting identities. In fact, that is Simon Wren-Lewis's point. But let's push forward. The article defines $T$ in terms of a tax rate $0 \leq r < 1$ on $f(t-1)$. However, instead of defining $S$ analogously in terms of a savings rate $0 \leq s < 1$ on $f(t-1)$, the article obfuscates this as a "constraint"

$$f(t-1) - T_{t} - S_{t} \geq 0$$

Let's rewrite this with a bit more clarity using a savings rate, substituting the definition of $T$ in terms of a tax rate $r$:

\begin{align} f(t-1) - r_{t} f(t-1) - S_{t} & \geq 0\\ (1- r_{t}) f(t-1) - S_{t} & \geq 0\\ s_{t} (1- r_{t}) f(t-1) & \equiv S_{t} \; \text{given}\; 0 \leq s_{t} < 1 \end{align}

Let's put both the re-definition of $T_{i}$ and this re-definition of $S_{i}$ in equation (1), where we can now solve the recursion and obtain

$$f(t) \equiv f(0) \prod_{i}^{t} \left(1-r_{i} \right) \left(1-s_{i} \right)$$

This equation isn't derived in the Medium article (and the recursion really can't be simplified without defining a savings rate). Note that both $s_{i}$ and $r_{i}$ are positive numbers less than 1. There's an additional definition that says $r_{t}$ can't be zero for all times. Therefore each product of (one minus) those numbers is another number $0 < a_{i} < 1$ (my real analysis class did come in handy!), so what we really have is:

$$\text{(2) }\; f(t) \equiv f(0) \prod_{i}^{t} a_{i}$$

And as we all know, if you multiply a number by a number that is less than one, it gets smaller. If you do that a bunch of times, it gets smaller still.

In fact, that is the content of all of the mathematical definitions in the Medium post. You can call it the polite cheese theorem: if you put out a piece of cheese at a party, and people take a non-zero fraction of it each half hour, the remaining piece gets smaller and smaller until eventually there is nothing left (i.e. somebody takes the last bit of cheese once it is small enough). Which is to say, for $t \gg 1$ (with dimensionless time), $X \equiv T + S$ because $f(t) = 0$ for $t \gg 1$.
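The polite cheese theorem is easy to check numerically. Here's a sketch with arbitrarily chosen constant rates $r$ and $s$ (the article allows them to vary; constants suffice for the point):

```python
# "Polite cheese theorem": with nonzero tax rate r and saving rate s,
# the recursion exhausts the initial outlay f(0), so cumulative taxes
# plus saving converge to f(0). Rates are arbitrary illustration values.
f0, r, s = 100.0, 0.2, 0.1
f, taxes, saving = f0, 0.0, 0.0
for _ in range(200):
    T = r * f              # taxes out of remaining income
    S = s * (1 - r) * f    # saving out of the after-tax remainder
    f, taxes, saving = f - T - S, taxes + T, saving + S

# each step multiplies f by (1 - r)(1 - s), matching equation (2)
print(round(f, 9), round(taxes + saving, 6))
```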

But that's just an accounting identity and the article just obfuscated that fact by writing it in terms of a recursive function. Anyway, I wrote it all up in Mathematica in footnote [2].

Now back to that emphasized sentence above:
> Expressing it using mathematics allows you to see what changing taxation rates attempts to do.
No. No it doesn't. If I write $Y = C + S + T$ per the accounting identities, then a change in $T$ by $\delta T$ means [3]

$$\text{(3) }\; \delta Y = \left( \frac{\partial C}{\partial T}+ \frac{\partial S}{\partial T} + 1 \right) \delta T$$

Does consumption rise or fall with an increased taxation rate? Does saving rise or fall with an increased taxation rate? Whatever the answers to those questions are, they are either models or empirical regularities. The math just helps you figure out the possibilities; it doesn't specify which occurs (for that you need data). The Medium article claims that all a change in taxation rates does is change how fast $f(t)$ falls (i.e. the number of transactions before it reaches zero). However, that's just a consequence of the assumptions leading to equation (2), and those assumptions are really assumptions about $\partial C/\partial T$ (and to a lesser extent $\partial S/\partial T$). Let's rearrange equation (3) and use $G = T + S$ [4]:

\begin{align} \delta Y = & \frac{\partial C}{\partial T}\delta T + \frac{\partial S}{\partial T}\delta T + \delta T \\ \delta Y = & \frac{\partial C}{\partial T}\delta T + \frac{\partial G}{\partial T}\delta T \\ \delta Y = & \frac{\partial C}{\partial T}\delta T + \delta G \end{align}

And there's where we see the obfuscation of the original prior. In the Medium article, $f(0) = X$ is first called the "initial government outlay"; that is, it's $\delta G$. However, later $f(t-1)$ is called "disposable income", which is to say $\delta Y - \delta T$. Those two statements are impossible to reconcile with the accounting identities unless $X$ is the initial net government outlay, meaning it is $\delta G - \delta T$. In that case we can reconcile the statements, but only if $\partial C/\partial T = 0$, because we've assumed

\begin{align} \delta Y - \delta T & = \delta G - \delta T\\ \delta Y & = \delta G \end{align}

This was a long journey to essentially arrive at the prior behind MMT: government spending is private income, and government spending does not offset private consumption. It was obfuscated by several equations that I clipped out of the quote at the top of this post. And you can see how that prior leads right to the "counterintuitive" statement at the beginning of the quote:
> Any proposed government policy is challenged with the same question: "how are you going to pay for it".
>
> The answer is: "by spending the money".
>
> Which may sound counter intuitive, but we can show how by using a bit of mathematics.
No, you don't need the mathematics. If government spending is private income, then (assuming there are only a private sector and a public sector) private spending is government "income" (i.e. the government outlay is paid back by private spending).

Now is this true? For me, it's hard to imagine that $\partial C/\partial T = 0$ or $\delta Y = \delta G$ exactly. The latter is probably a good approximation (effective theory) at the zero lower bound or for low inflation (it's a similar result to the IS-LM model). For small taxation changes, we can probably assume $\partial C/\partial T \approx 0$. Overall, I have no real problem with it. It's probably not a completely wrong collection of assumptions.

What I do have a problem with, however, is the unnecessary mathiness. I think it's there to cover up the founding principle of MMT that government spending is private income. Why? I don't know. Maybe they don't think people will accept that government spending is their income (which could easily be construed as saying we're all on welfare)? Noah Smith called MMT a kind of halfway house for Austrian school devotees, so maybe there's some residual shame about interventionism? Maybe MMT people don't really care about empirical data, and so there's just an effluence of theory? Maybe MMT people don't want to say they're making unfounded assumptions just like mainstream economists (or anyone, really) and so hide them "chameleon model"-style a la Paul Pfleiderer.

Whatever the reason (I like the last one), all the stock-flow analysis, complex accounting, and details of how the monetary system works serve mainly to obscure the primary point that government spending is private income for us as a society. It's really just a consequence of the fact that your spending is my income and vice versa. That understanding is used to motivate a case against austerity: government cutting spending is equivalent to cutting private income. From there, MMT people tell us austerity is bad and fiscal stimulus is good. This advice is not terribly different from what Keynesian economics says. And again, I have no real problem with it.

I'm sure I will get some comments saying I've completely misunderstood MMT and that it's really about something else. If so, please don't forget to tell us all what that "something else" is. But the statement here that "money is a tax credit" plus accounting really does say, basically, that government spending is our income.

But with all the definitions and equations, it ends up looking and feeling like this:

There seems to be a substitution of mathematics for understanding. In fact, the Medium article appears to think the derivation it goes through is necessary to reach its conclusion. But how can a series of definitions lead to anything that isn't itself effectively a definition?

Let me give you an analogy. Through a series of definitions (as I did as an undergrad math major in that same real analysis course mentioned above), I can arrive at the statement

$$\frac{df(x)}{dx} = 0$$

implies $x$ is a critical point of $f(x)$ (a candidate minimum or maximum). There's a bunch of set theory (Dedekind cuts) and some other theorems that can be proven along the way (e.g. the mean value theorem). However, this tells us nothing about the real world unless we make some connection to it. For example, I could call $f(x)$ tax revenue and $x$ the tax rate ‒ and, adding some other definitions ($f(x) > 0$ for $0 < x < 1$ with $f(0) = f(1) = 0$), say that the Laffer curve is something you can clearly see if you just express it in terms of mathematics.

The thing is that the Laffer curve is really just a consequence of those particular definitions. Whether or not it's a *useful* consequence of those definitions depends on comparing the "Laffer theory" to data.
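As a concrete (and deliberately trivial) sketch: assuming an illustrative revenue curve $f(x) = x(1 - x)$ ‒ which satisfies the definitions above but isn't derived from any data ‒ the interior maximum the definitions guarantee is easy to confirm numerically:

```python
import numpy as np

# Hypothetical "Laffer" revenue curve satisfying the definitions above:
# f(0) = f(1) = 0 and f(x) > 0 in between. The specific form x*(1 - x)
# is an illustrative assumption, not an empirical claim.
def revenue(x):
    return x * (1.0 - x)

rates = np.linspace(0.0, 1.0, 10001)     # candidate tax rates on [0, 1]
peak = rates[np.argmax(revenue(rates))]  # rate with the largest revenue
print(peak)  # → 0.5: an interior maximum exists, as the definitions require
```

Whether that peak says anything about actual tax revenue is exactly the empirical question the definitions alone can't answer.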

Likewise, whether or not "private spending pays off government spending" is a useful consequence of the definitions in the Medium article critically depends on whether or not those MMT definitions result in a good empirical description of a macroeconomy.

Without comparing models to data, physics would just be a bunch of mathematical philosophy. And without comparing macroeconomic models to data, economics is just a bunch of mathematical philosophy.

...

Update 5 May 2017:

Here's a graphical depiction of the different ways an identity $G = B + R$ can change depending on assumptions. These would be good pictures to use to try and figure out which one someone has in their head. For example, Neil has the top-right picture in his head. The crowding out picture is the bottom-right. You could call the picture on the bottom-left a "multiplier" picture.
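The same point can be made in a few lines of code (a minimal sketch ‒ the assumption labels here are hypothetical stand-ins for the quadrants, not the actual pictures): the identity $G = B + R$ only constrains the sum of the changes, so some behavioral assumption has to decide how a change $\delta G$ is split.

```python
# Toy illustration: the identity G = B + R is preserved no matter which
# behavioral assumption splits a change dG between dB and dR.
def split_change(dG, assumption):
    if assumption == "all_B":    # B absorbs the whole change
        return dG, 0.0
    if assumption == "all_R":    # R absorbs the whole change
        return 0.0, dG
    if assumption == "even":     # the change is shared evenly
        return dG / 2.0, dG / 2.0
    raise ValueError(assumption)

for a in ("all_B", "all_R", "even"):
    dB, dR = split_change(1.0, a)
    assert abs((dB + dR) - 1.0) < 1e-12  # identity holds in every case
    print(a, dB, dR)
```

All three pass the identity check, which is exactly why the identity alone can't tell you which picture someone has in their head.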

Update 6 May 2017: Fixed the bottom left quadrant of the picture to match the top right quadrant.

...

Footnotes:

[1] This is basically equivalent to what is done in the Medium article.

[2] Here you go:

[3] If someone dares to say something about discrete versus continuous variables I will smack you down with some algebraic topology [pdf].

[4] I think people who reason from accounting identities seem to make the same mistakes that undergrad physics students make when reasoning from thermodynamic potentials. Actually, in the information equilibrium ensemble picture this becomes a more explicit analogy.

### The reason for the proliferation of macro models?

Noah Smith wrote something that caught my eye:
One thing I still notice about macro, including the papers Reis cites, is the continued proliferation of models. Almost every macro paper has a theory section. Because it takes more than one empirical paper to properly test a theory, this means that theories are being created in macro at a far greater rate than they can be tested.
This is fascinating, as it's completely unheard of in physics. Nearly every theory or model in a physics paper does at least one of four things:

1. It's compared to some kind of data
2. It's predicting a new effect that could be measured by new data
3. It's included for pedagogical reasons
4. It reduces to existing theories that have been tested

I'll use some of my own papers to demonstrate this:

https://arxiv.org/abs/nucl-th/0202016
The paper above is compared to data. The model fails, but that was the point: we wanted to show that a particular approach would fail.
https://arxiv.org/abs/nucl-th/0505048
The two papers above predict new effects that would be measured at Jefferson Lab.
https://arxiv.org/abs/nucl-th/0509033
The two papers above contain pedagogical examples and math. The first has five different models, but only one is compared to data. The second is more about the math.
Finally in my thesis linked above, I show how the "new" theory I was using connects to existing chiral perturbation theory and lattice QCD.
Of course, the immediate cry will be: What about string theory! But then string theory is about new physics at scales that can't currently be measured, and most string theory papers fall under 2, 3, or 4. If all these macroeconomic models were about quantities we can't yet measure, the string theory comparison might have a point.

Even Einstein's paper on general relativity showed how the theory could be tested, how it explained existing data, and how it reduced to existing theories:

1. Reducing to Newton's law of gravity
2. Predicting a new effect: the bending of light by massive objects
3. Explaining the precession of Mercury's perihelion

I'm sure there are exceptions out there, but the rule is that if you come up with a theory, you show how it connects (or could connect) to data or to other existing theories ‒ or you say you're just working out some math.

In any case, if you have a new model that can or should be tested with empirical data, the original paper should have the first test. Additionally, it should pass that first test ‒ otherwise, why publish? "Here's a model that's wrong" is not exactly something that warrants publication in a peer reviewed journal except under particular circumstances [1]. And those circumstances are basically the ones that occur in my first paper listed above: you are trying to show a particular model approach will not work. In that paper I was showing that a relativistic mean-field effective theory approach in terms of hadrons cannot produce the type of effect that was being observed (motivating the quark-level picture I would later work on).

The situation Noah describes is just baffling to me. You supposedly had some data you were looking at that gave you the idea for the model, right? Or do people just posit "what-if" models in macroeconomics ... and then continue to consider them as ... um, plausible descriptions of how the world works ... um, without testing them???

...

Footnote:

[1] This is not the same thing as saying don't publish negative results. Negative empirical results are useful. We are talking about papers with theory in them. Ostensibly, the point of theory is to explain data. If it fails at its one job, then why are we publishing it?

[2] When I looked it up for this blog post, it turns out another paper ‒ one I didn't know about ‒ demonstrates a similar result (about the Hugenholtz-van Hove theorem [pdf]) and was published three months later (in the same journal):

https://arxiv.org/abs/nucl-th/0204008