Sunday, May 29, 2016

Falsifiable statements are not philosophical disagreements

I already mentioned this on Twitter, but I thought I'd add a few more details on the blog. I'm always a bit confused by arguments that are basically: we haven't figured it out, therefore we'll never figure it out. I've mentioned this elsewhere [1]. Just imagine someone arguing this about the nature of the Sun in the 1500s. Manu Saadia (whose new book Trekonomics comes out in a couple of days; I plan to get it based on what I've seen him say about it so far) got into an argument with Noah Smith about whether sociology should be scientific. Saadia fell into this line of reasoning:

Saadia says: "I don't believe social objects function and can be observed like physical objects."

This is a falsifiable hypothesis (the evidence would be the existence of successful theoretical and empirical sociological research) -- therefore it is not a "philosophical disagreement". Disagreeing about how the sun works is not a philosophical disagreement -- there exist various models that have varying degrees of empirical accuracy. One needs evidence that social objects have the properties Saadia endows them with by fiat. Calling it philosophy seems like a way of arguing that he doesn't need evidence.

I call this the fallacy of arguing from failure of imagination; just because you can't think of a way social objects could function or be observed like physical objects doesn't mean no such way exists.

Now I guess it is fine to set Saadia's hypothesis as the null hypothesis, and you could say my argument is an attempt to capture the null hypothesis myself. But that's just a mathematical convention in statistical tests -- which hypothesis gets to be the null isn't specified by the mathematical theory.

H0 = Social science exists
H0 = Social science doesn't exist

If you switch back and forth between the two, you'd probably find there isn't enough evidence to reject either [2].



[1] I'm actually pretty proud of the whole series of posts that come up when you search for "complexity" on my blog.

[2] I'm being charitable. There is plenty of evidence that some aspects of social science are falsifiable, measurable and empirically testable. I'm just too lazy to look up some examples right now. However, the blog that you're reading right now is at least one existence proof.

Friday, May 27, 2016


As a side note to this post, I'd like to point out that microeconomic rationality assumptions have no macroeconomic implications. It's one of the takeaways from the Sonnenschein-Mantel-Debreu (SMD) theorem. Therefore a complaint that economics has perfectly rational individuals is really irrelevant to macroeconomic policy.

So if you are ever talking about policy (like here), complaints about the assumption of rationality in traditional economics are kind of silly.

I'm under the impression that you can make all kinds of assumptions about individuals and end up with the same macroeconomy (for one thing, that solves the problem of mapping tractable/simplified agent-based models to conclusions about real economies).

New economics: now with all new unfounded assumptions!

Not sure traditional economics is charitably represented here, but I for one don't see the difference between assuming column A and assuming column B. If I had my druthers, we'd go with column C. Chart modified from Eric Beinhocker's chart reproduced at Evonomics.

One of the more maddening things about the public discourse on economics is that it seems a large fraction of it is based on simply exchanging one set of theoretical assumptions for another [1]. The force multiplier of annoyance is the fact that these theoretical assumptions assume things you should be using your theory to understand. If I were to write this as symbolic logic, we'd have:


The above table is reproduced (and greatly enhanced by the addition of a third column) from this article at Evonomics, but the problem runs the gamut from assuming complexity and nonlinearity (here, here, here or here) to assuming things about monetary policy (e.g. here) to assuming what a recession is (e.g. here).

Really the assumptions in the table above, or about complexity, or about recessions are models that should be constructed from more fundamental assumptions. Those fundamental assumptions should follow the admittedly subjective rule of being arguably self-evident. For one thing, they should not include assumptions about human behavior!

An example from physics might be illuminating. What are the underlying assumptions of classical physics? That continuous motion exists and it doesn't matter where it happens in space. In more technical language, we assume calculus and space translation invariance (aka conservation of momentum). Newton expressed these in terms of the time derivative of momentum, but the basic story is calculus plus momentum. Newton did not assume what matter is made of, how the solar system worked, or Kepler's laws.

Can we construct a comparable assumption for economics? In the sense that physics is the study of motion, economics is the study of exchange and prices. And prices would be irrelevant if everything that was wanted ("supply") was in the possession (i.e. in the same location in time and space) of the wanting entity ("demand") and things never changed. Therefore we can assume (at a minimum) that when the instances of supply and instances of demand match, we have a scale invariance (a subset of conformal symmetry, see here or here) between supply and demand that leaves prices invariant. Additionally, we can assume prices should only change if supply and/or demand change. 

The information transfer approach takes the statement "instances of supply and instances of demand match" and makes it probabilistic (probability distributions of supply events and demand events match and we have a large number of events), allowing us to talk about the information entropy of those distributions and deriving an equation (here or in my paper) that holds in general that is consistent with that conformal symmetry:

p \equiv \frac{dD}{dS} \leq k \; \frac{D}{S}

Analogous to the way Newton defined force (a concept people vaguely intuited), we're defining price (a concept in my opinion people have only vaguely intuited) as the LHS of the equation. We call equality "information equilibrium".
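As a quick numerical sketch (the constants here are hypothetical, chosen only to make the check concrete), the general solution of the information equilibrium condition is $D = D_{ref} (S/S_{ref})^{k}$, and we can verify both the condition and the scale invariance mentioned above:

```python
# Sketch of the information equilibrium condition p = dD/dS = k D/S.
# The general solution is D = D_ref * (S / S_ref)**k; the constants
# below are hypothetical, for illustration only.
k, S_ref, D_ref = 1.5, 2.0, 3.0

def D(S):
    return D_ref * (S / S_ref) ** k

def price(S, h=1e-6):
    # p = dD/dS via a central finite difference
    return (D(S + h) - D(S - h)) / (2 * h)

S = 5.0
assert abs(price(S) - k * D(S) / S) < 1e-6   # p = k D/S holds

# Scale invariance: D -> lam*D, S -> lam*S leaves the ratio
# (and so the ideal price) unchanged
lam = 7.0
assert abs((lam * D(S)) / (lam * S) - D(S) / S) < 1e-12
```

The exponent $k$ plays the role of the information transfer index; any power-law relationship between $D$ and $S$ satisfies the equilibrium condition.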

Now it is possible to make other assumptions about economics. Another strong contender is that economics is the study of strategic decisions over a set of options made by people who have one kind of stuff and people who have another kind of stuff. That would lead to game theory, but the "strategic decisions" component does make some assumptions about human behavior (i.e. that we make strategic decisions).

The information transfer approach may not be correct. Economics may not really be about matching supply with demand. However, it at least makes the kind of assumptions you should be making [2] -- and doesn't assume the answers to questions about the impacts of policy, institutions, individual behavior or recessions that should be the focus of economics.



[1] This just makes me think of this quote from Ghostbusters II:
Dana: Okay, but after dinner, don't put any of those old cheap moves on me. It's different now.
Peter Venkman: Oh, no! I have all NEW cheap moves.
[2] This aspect is closely related to the fact that you can understand the different "schools" of economics in terms of extra assumptions beyond the information equilibrium assumptions.

Monday, May 23, 2016

Modeling in physics versus modeling in economics

Paul Pfleiderer has some slides on what he called "chameleon models" [pdf] that are in general very good. As always when economists make physics analogies, he makes some mistakes. As I said on Twitter the other day, economists should stick to Newtonian physics when making analogies. There's really nothing in quantum mechanics that is relevant to economics that isn't also present in classical mechanics. When Pfleiderer says "We do observe something about the paths [electrons and photons] take", he should realize that although reality is a lot weirder, a good starting point is that quantum effects result from not knowing which path electrons and photons take.

Aside from that, this pair of slides was interesting (I added the question mark):

Now one pair of statements is a great illustration of the difference between economic (well, finance) methodology and physics methodology for a couple of reasons. It is:
  • Physics: Models do not contradict other things we know about electrons and photons.
  • Economics (finance): Models often contradict things we know about actors 
On one level, this illustrates an issue with economics. On another level, however, physics models often contradict things we know about electrons and photons. An example is right there in Pfleiderer's chart! It shows both a Dirac equation (relativistic, spin-1/2) and a Schrodinger equation (non-relativistic, spin-independent). However, physics models will occasionally use the Schrodinger equation to describe electrons, contradicting the fact that we know electrons are spin-1/2 and Einstein was right. The problem is that economic models lack scope conditions that tell us when the contradictions matter. In physics, we can make assumptions about the importance of spin (frequently it is only important in counting degrees of freedom) and relativity (kinetic energy of the electron E << m, velocity v << c, the speed of light). This means the theory contradicts what we know, but it also sets scope conditions so that we know the contradictions don't matter.

There's another pair that illustrates a difference on multiple levels:
  • Physics: We don’t observe anything about what “motivates” electrons and photons to make decisions
  • Economics: We observe a lot about decision makers making financial decisions. We can even ask them what they are doing (or at least what they think they are doing)
Well, we don't directly observe the wave function -- it's a mathematical construct that reproduces empirical results. It's not entirely off-base (although it is completely unnecessary) to say that the wave function is what "motivates" electrons to create an interference pattern (there is an interpretation of quantum mechanics where this happens).

However, one of the major differences is that physicists don't assume e.g. electrons have any properties besides the ones assigned in the model. The electron in quantum field theory (the best we know) is an excitation of a spin-1/2 field representation of the Poincare symmetry group with a collection of charges -- a U(1) charge, an SU(2) charge and an SU(3) charge (zero). Full stop. Electrons have zero other properties and anything with those properties is an electron. Economists sometimes call people rational utility maximizers, but are aware that some humans have birthday parties or don't optimize the ultimatum game.

That means any economic theory is always an approximation with some limited scope. The real trouble with chameleon models is that they don't identify their scope -- you take an idealized model with limited scope and say that it has policy implications for the real world (extensive scope).

But I'd like to focus on this statement:
We observe a lot about decision makers making financial decisions. We can even ask them what they are doing (or at least what they think they are doing)
I do like the addition of the parenthetical, but there are two major issues here: post hoc rationalization and complexity. The former just means that asking humans about their decisions isn't always a reliable source of data. We tend to fit events into a story about how and why things work out the way they do. By the way, this is one of my problems with "stories" in economics. This is exactly the kind of thing you don't want to be doing if you're trying to do science because it is exactly one of the ways humans fool themselves [1].

The complexity is more difficult to deal with. We can try to ask why someone does what they do, but we may not know the relevant set of questions. A trader might not have executed a trade because they had an emergency where they had to pick up their child at school (just a potential example) or went drinking the night before and were a little foggy the next day. This gets even more complex when you are dealing with human decision-making in the field. I didn't buy champagne at Store A one day because I can get it cheaper at Store B. But another day, I bought it at Store A because I had a long day at work and didn't want to make two stops on the way home. Did these hypothetical researchers on the price elasticity of champagne have the wherewithal to ask about how my work day went? Did I even know that's why I did it? Maybe I thought traffic was bad, but really it was my long day at work that led to my interpretation that traffic was bad (I had less patience) -- even though traffic was normal. Or maybe traffic really was bad, and that was the reason I decided to stay later at work, making my day long.

Economists recognize this as omitted variable bias, but really we have two sources of omitted variable bias: the economists and the human subjects. The economists may not think to ask X and the human may not think X is important. And causal loop diagrams can sometimes get pretty complex!

This is why I think we should plead ignorance about why people do things and use random decisions as a start. If random decisions turn out to be a good description, then we know the state space (opportunity set of possible decisions) is more important than the agent state space occupations (agent decisions).

I've talked about this before; it is possible (even likely) that random agents might not work in some (most?) cases. My opinion is that when agent decisions become more important than the state space of available decisions you're really studying psychology, not economics.
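As a toy illustration of the "random decisions" starting point (the opportunity set below is made up for the example), agents choosing uniformly at random end up occupying the state space in proportions set by the state space alone -- no model of individual motivation is needed:

```python
import random
from collections import Counter

random.seed(0)

# Toy version of the "plead ignorance" starting point: agents pick
# uniformly from a (hypothetical) opportunity set, so the aggregate
# occupation is determined by the state space, not by psychology.
options = ["A", "B", "C", "D"]
n_agents = 100_000

choices = Counter(random.choice(options) for _ in range(n_agents))

for opt in options:
    share = choices[opt] / n_agents
    # each state ends up with ~1/len(options) of the agents
    assert abs(share - 1 / len(options)) < 0.01
```

If the observed occupation deviates strongly from this maximum-entropy baseline, that's the signal that agent decisions (psychology) are dominating over the state space.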



[1] Pfleiderer includes the quote from Feynman at the end:
Science is what we have learned about how to keep from fooling ourselves.

Saturday, May 21, 2016

Monetarism's epistemological nightmare

As I mentioned yesterday, I was reading over George Selgin's series on monetary policy. The post on money demand is an epistemological nightmare. Let's see if I get this straight ...
(Somewhat) Shorter Selgin: There's supply and demand for money, but there's no real way to measure the supply of money. That's okay; we can still use the concept of demand for money (which is different from wanting money). Let's assume monetarist economics -- define economic growth as money demand growing slower than the (unmeasurable) money supply, and economic contraction as money demand growing faster than the (unmeasurable) money supply. Now monetary policy can mitigate that shortage or surplus of (again, unmeasurable) money by "an appropriate change in the available number of dollars of different sorts" (where the meanings of "appropriate", "available number", and "different sorts" are unknown, unmeasurable and undefined, respectively). But also, even though I started off this explanation with demand for money, we're really talking about the demand for purchasing power, which involves the demand for money and the prices of goods. And if prices were flexible, then changing the (unmeasurable) supply of money would just lead to changes in prices of goods.
One thing I think is useful about the information equilibrium model is that you can make it clear when someone is trying to pull a fast one with a couple symbols. Selgin's model is two information equilibrium relationships:
  • NGDP ⇄ ?? if prices are sticky, and
  • CPI ⇄ ?? if prices are flexible
Where ?? is the (unmeasurable) "available number of dollars of different sorts". But even the equation of exchange is NGDP = ?? × V_{??}.

Let's look at this crazy passage:
Finally, the fact that an increased demand for money manifests itself in peoples' refraining from spending the stuff, while a decline in the demand for money translates into increased spending, means that one can get a handle on whether an economy has too much, too little, or just enough money without having to decide just what "money" consists of. One need only keep track of overall spending or aggregate demand. Whenever overall spending goes up, that's a sign that the supply of money is growing faster than the demand for it. When it shrinks, it's a sign that demand for money is growing relative to the available supply.
We don't need to decide what money is because we just assume how macroeconomics works. Just trust us. And whatever money is, there's also a counterfactual path of the economy where overall spending grows at some undefined lower rate that tells us what the counterfactual path of undefined money is where the supply and demand for it grow at the same rate.

I think there is no way around it at this point. Monetarism is a degenerative research program. It started with gold, then moved on to M1, and when that didn't work, M2. Or M3. Or Mn+1. Or maybe velocity changes.

The massive increase in the monetary base after QE in the US basically de-coupled monetarism from money at all. Selgin tells us it doesn't matter, and in various other places it's been completely replaced by expectations of ... what? ... monetary expansion? M2 expansion? Base expansion? Ah, but people could expect the base expansion could be taken away.

If money enters the market and no one expects it, does it raise prices?

Friday, May 20, 2016

The scales of sticky wages

I was reading over George Selgin's series on monetary policy (to look for contrasts with the information equilibrium view), and he put up a link to this paper on downward nominal wage rigidity.
After correcting for measurement error, wages appear to be very sticky. In the average quarter, the probability that an individual will experience a nominal wage change is between 5 and 18 percent, depending on the samples and assumptions used.
Sounds like wages don't change, right? So what is the likelihood of an individual having at least one wage change during a year (4 quarters)? Those of you familiar with the birthday problem already have the answer. What you do is ask what is the complement probability of not having a wage change in Q1 and not having a wage change in Q2 and not having a wage change in Q3 and not having a wage change in Q4 ...

1 - (1 - 0.18)^4 ≈ 0.55
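The complement calculation can be sketched for both ends of the quoted range (a one-liner, included only for concreteness):

```python
# The complement trick from the text: the probability of at least one
# nominal wage change in a year, given the per-quarter probability.
def p_change_within_year(p_quarter, quarters=4):
    return 1 - (1 - p_quarter) ** quarters

low = p_change_within_year(0.05)
high = p_change_within_year(0.18)
assert abs(low - 0.19) < 0.01    # ~19%
assert abs(high - 0.55) < 0.01   # ~55%
```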

So while the probability of a wage change is only 5 to 18% in a single quarter, the probability of a wage change during the year is between 19 and 55%. That doesn't seem as sticky. Let's check out the distribution of wage changes from the paper:

This graph strongly suggests downward nominal wage rigidity (DNWR) -- the probability of a wage cut is low compared to a wage increase. Note this is from 1996, a year of strong economic growth in the US, so in a downturn DNWR would likely be even more pronounced. Part of this is an optical illusion because they excise the zero wage change bin and the distribution has a positive mean. Let's draw in a Cauchy distribution (which looks like it fits well) and check out exactly how much of a perturbation downward nominal wage rigidity is:

The spike above the Cauchy distribution is another anchoring effect -- the cost of living/inflation adjustment (this is a model assumption). We can fit the discrepancy:

And subtract it:

And here is the full distribution -- including an estimate of the zero wage change bin:

We have three effects: COL anchoring, zero-change anchoring, and downward nominal wage rigidity (DNWR). Together the zero effect and COL anchoring represent about a 10-20% effect, meaning DNWR is about a 20% effect on its own -- which is sizeable. With the exception of the COL anchoring, I'd say these are all "first order" effects.

The question is: over what time scale? As I talked about above, the probability of a wage change within a year is about 55%. In fact, if you iterate the distribution over several quarters, these effects disappear:

Note that I show the wage change relative to the mean and that I cut off the (pathological) Cauchy distribution at ±1 (wage changes don't really go to infinity). I don't show the COL and zero effects, but they disappear as well. How do I know this? The central limit theorem. Generally, any distribution with a finite mean and variance will approach a normal distribution as you sum up events (in e.g. a random walk).
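A quick Monte Carlo sketch of that central limit argument (using a Cauchy distribution truncated at ±1, as in the figure, with sample sizes chosen arbitrarily): single-quarter draws are distinctly non-normal, but summing eight quarters mostly washes out the excess kurtosis:

```python
import math
import random

random.seed(1)

# Draws from a standard Cauchy truncated to [-1, 1] (finite mean and
# variance), via inverse-CDF sampling: F(x) = 1/2 + atan(x)/pi, so
# F(-1) = 0.25 and F(1) = 0.75.
def truncated_cauchy():
    u = random.uniform(0.25, 0.75)
    return math.tan(math.pi * (u - 0.5))

def excess_kurtosis(xs):
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3.0

single = [truncated_cauchy() for _ in range(50_000)]
summed = [sum(truncated_cauchy() for _ in range(8)) for _ in range(50_000)]

# the single-quarter distribution is distinctly non-normal ...
assert abs(excess_kurtosis(single)) > 0.5
# ... but after 8 quarters the excess kurtosis has mostly washed out
assert abs(excess_kurtosis(summed)) < 0.3
```

The same convergence applies to the COL and zero-change spikes: they are just more structure that the sum smooths away.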

We can see that the DNWR disappears after a few quarters. Therefore we might consider DNWR to impact macroeconomics over a quarter or two; after a year the impact should have vanished -- i.e. wages are microeconomically flexible over a few quarters. Wages may be micro flexible, but there could still be "macro stickiness" -- which was the subject of this post.

Monday, May 16, 2016

Recognizing complexity by inspection

Eric Liu (speechwriter) and Nick Hanauer (business person) have a new article at Evonomics that is an excerpt from their book The Gardens of Democracy. Obviously, they have the requisite skills to identify a complex nonlinear system by inspection:
Traditional economic theory is rooted in a 19th- and 20th-century understanding of science and mathematics. At the simplest level, traditional theory assumes economies are linear systems filled with rational actors who seek to optimize their situation. Outputs reflect a sum of inputs, the system is closed, and if big change comes it comes as an external shock. The system’s default state is equilibrium. The prevailing metaphor is a machine. 
But this is not how economies are. It never has been. As anyone can see and feel today, economies behave in ways that are non-linear and irrational, and often violently so. These often-violent changes are not external shocks but emergent properties—the inevitable result—of the way economies behave.
Yes, it's so violent:

For reference, let's look at an actual complex biological system (Lynx population with predator-prey dynamics):

If changes in the US economy were as violent as the population dynamics of the lynx, the economy would have collapsed to approximately zero GDP and sprung back again [1]. During the past 70 years, the US economy has not received a quarterly shock of more than -10% (and that's annualized -- equivalent to an actual quarterly shock of less than -2.5%). Shocks on the order of a few percent mean the economy is well within the realm of perturbation theory.
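The annualized-to-quarterly conversion is just compounding (a one-line check):

```python
# Compounding check of the annualized figure quoted above: an
# annualized -10% shock corresponds to a quarterly change of ~ -2.6%.
annualized = -0.10
quarterly = (1 + annualized) ** 0.25 - 1
assert -0.027 < quarterly < -0.025
```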

Whatever your theory of how economics works ...

\frac{\Delta NGDP}{NGDP} \approx 0

is an excellent starting point. Do I have to show this graph again:


Footnote added 5 June 2018

[1] Part of what defines a dynamical system's chaos is that it visits nearly all of its phase space and nearby elements separate exponentially in time. 

What happens when you push on a price?

Ed. note: This post has been sitting around because I never found a satisfying answer. However, this post from John Handley inspired a comment that led to a more scientific take on it.
A lot of economics deals with situations where some entity impacts a market price: taxes, subsidies, or interest rates in general with a central bank. With the information equilibrium picture of economics, it's easy to say what happens when you change demand or supply ... the price is a detector of information flow.

For my thought experiments, I always like to think of an ideal gas with pressure $p$, energy $E$ and volume $V$ (analogous to price $p$, demand $D$ and supply $S$, respectively):

p = k \frac{E}{V}

How do I increase the pressure of the system? Well, I can reduce $V$ or increase $E$ (raise temperature or add more gas). One thing is for certain: grabbing the pressure needle and turning it will not raise the pressure of the gas! (This is like Nick Rowe's thought experiment of grabbing the speedometer needle).

... at least under the conditions where the detector represents an ideal probe (the probe has minimal impact on the system ... like that pressure gauge or speedometer needle). But our probe is the market itself -- it is maximally connected to the system. Therefore when you push on a price (through a regulation, tax, minimum wage, or quota system), it does impact supply and/or demand. The and/or is critical because these impacts are observed to be empirically different.

Since we don't know, we have to plead ignorance. Therefore price dynamics (for a short time and near equilibrium with $D \approx D_{eq}$ and $S \approx S_{eq}$) should follow:

\begin{aligned}
\frac{dp}{dt} = \; & a_{0} + a_{1} t + o(t^{2})\\
& + d_{10} (D - D_{eq}) + o(D^{2})\\
& + s_{10} (S - S_{eq}) + o(S^{2})\\
& + d_{11} \frac{d}{dt} (D - D_{eq}) + o(D^{2})\\
& + s_{11} \frac{d}{dt} (S - S_{eq}) + o(S^{2})\\
& + c_{20} (D - D_{eq})(S - S_{eq}) + o(D^{2}S^{2})
\end{aligned}

This gives us an excellent way to organize a lot of effects. The leading constant coefficient would be where un-modeled macroeconomic inflation would go (it is a kind of mean field approximation). Entering into $a_{0}$ and $a_{1}$ would be non-ideal information transfer -- movements in the prices that have nothing to do with changes in supply and demand. Interestingly, these first terms also contain expectations.

The next terms do not make the assumption that $D_{eq} = S_{eq}$ or that they even adjust at the same rate. This covers the possibilities that demand could perpetually outstrip supply (leading to market-specific inflation -- housing comes to mind), and that demand adjusts to price changes faster than supply does (or vice versa). For example, demand for gasoline is fairly constant for small shifts in price, so price changes reflect changes in supply ($d_{10} \approx 0$). If you think pushing on a price moves you to a different equilibrium, then you might take $X_{eq} = X_{eq}(t)$, but we'll assume $dX_{eq}/dt = 0$ for now.

Basically, your theory of economics determines the particular form of the expansion. The "Walrasian" assumption (per John Handley's post) is that $D = S$ always. Adding rational expectations of (constant) inflation leaves you with the model:

\frac{dp}{dt} = a_{0}

Assuming information equilibrium yields a non-trivial restriction on the form of the expansion (see e.g. here for what happens when you add time to the information equilibrium condition). We obtain (taking $X - X_{eq} \equiv \Delta X$):

\frac{dp}{dt} = \frac{k}{S_{eq}} \frac{dD}{dt} - k \frac{\Delta S}{S_{eq}^{2}} \frac{dD}{dt} - k \frac{\Delta D}{S_{eq}^{2}} \frac{dS}{dt} + \cdots

We find that almost all of the terms in the expansion above have zero coefficients. The leading term would be $d_{11} = k/S_{eq}$. The next terms would be the $c_{21}$ terms -- second order cross terms with one time derivative. Including only the lowest order terms and adding back in the possibility of non-ideal information transfer, we have

\frac{dp}{dt} = a_{0} + a_{1} t + \frac{k}{S_{eq}} \frac{dD}{dt}

All small price changes are due to (temporal) changes in demand or non-ideal information transfer! Integrating (dropping the higher order time term):

p(t) - p(t_{0}) = a_{0} (t-t_{0}) + \frac{k}{S_{eq}} (D(t) - D(t_{0}))

This means when you push on a price, at least to leading order, you impact demand (or cause non-ideal information transfer). It also has the opposite sign you might expect. An increase in price would increase demand! Note that this assumes general equilibrium (where demand and supply both adjust quickly to changes). But in general equilibrium, increasing demand means increasing supply as well, so we can understand the result that way. It could also be the case that nominal demand ($D$) goes up while real demand ($D/p$) goes down depending on the value of the coefficients.
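As a numerical sanity check on the leading terms (a sketch with hypothetical exponential paths for $D$ and $S$, not part of the derivation): for $p = k D/S$, the exact time derivative matches $(k/S)\,dD/dt - (kD/S^{2})\,dS/dt$:

```python
import math

# Sanity check: for p = k D/S, the exact time derivative is
# (k/S) dD/dt - (k D/S^2) dS/dt. The exponential paths D(t), S(t)
# and their growth rates are hypothetical, chosen for illustration.
k, g_D, g_S = 1.2, 0.03, 0.02

def D(t): return math.exp(g_D * t)
def S(t): return math.exp(g_S * t)
def price(t): return k * D(t) / S(t)

t, h = 10.0, 1e-6
dp_dt = (price(t + h) - price(t - h)) / (2 * h)   # numerical dp/dt
exact = (k / S(t)) * (g_D * D(t)) - (k * D(t) / S(t) ** 2) * (g_S * S(t))
assert abs(dp_dt - exact) < 1e-6
```

Expanding the $1/S$ and $D$ factors around $S_{eq}$ and $D_{eq}$ in the second term is what produces the $\Delta S$ and $\Delta D$ pieces in the expansion above.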

If we assume demand adjusts slowly ($dD/dt \approx 0$), then we get the "Econ 101" result (returning to information equilibrium) where an increase in price reduces demand, assuming supply is increasing (e.g. economic growth):

\frac{dp}{dt} = - k \frac{\Delta D}{S_{eq}^{2}} \frac{dS}{dt}

For information equilibrium to reproduce the Econ 101 result that a tax increase reduces demand, you have to assume 1) information transfer is ideal, 2) demand changes slowly, and 3) economic growth ... or instead of 1-3, just assume non-ideal information transfer. Therefore the simplest explanations of the standard Econ 101 impacts of pushing on a price would actually be a decline in real demand or breaking information equilibrium.

This is not to say these assumptions aren't valid -- they could well be. It's just that there are a lot of assumptions at work whenever anyone tells you what the effects of changing a price are.

Saturday, May 14, 2016

Update of UK inflation prediction

There's nothing particularly exciting about this prediction, but I am updating it because I put it out there. New data is in black -- since I was unable to find seasonally adjusted data (and I still haven't bothered to build a filter myself), I tried to remove the seasonal variation with a LOESS filter. It's still pretty messy.

Exit through the hyperinflation, redux

In going back to my post Exit through the hyperinflation -- about jumping from the general equilibrium solution of the information equilibrium conditions to the accelerating inflation solution (hyperinflation) -- as part of this discussion of WWII price controls, I realized that the original post used the full monetary base (including reserves).

This approach turned out to be wrong, but I never updated these earlier graphs. So here they are:

I added a year-over-year inflation graph. The major deviations are the WWII price controls and the oil crises in the 1970s.

This is what confirmation bias looks like (minimum wage edition)

"If a man is offered a fact which goes against his instincts, he will scrutinize it closely, and unless the evidence is overwhelming, he will refuse to believe it. If, on the other hand, he is offered something which affords a reason for acting in accordance to his instincts, he will accept it even on the slightest evidence."
Bertrand Russell

I usually use Marginal Revolution as a source for ideas for things I can try to represent with random agents (it nearly always works), but yesterday they put forward a rather ridiculous study [pdf]. Here's the first line of the abstract:
We study the effect of the minimum wage on labor market outcomes for young workers using U.S. county-level panel data from the first quarter of 2000 to the first quarter of 2009.

Hmm. First quarter of 2009? I think I remember something big happening around that time. Mobile Prudential Vices? No, that's not it ...

Later in the paper ...
Removing the recessions that bracket our full sample from the data reduces the estimated minimum wage responses of earnings and employment for the youngest workers

So they show the result with data only from 2004 to 2007. But look at what removing the periods containing the recessions does ...

... it removes most of the minimum wage increases. All the bolded entries are minimum wage increases. Well, almost all. There are a few 5.15's randomly bolded in the middle of the table, as well as a few other mistaken entries. I only guessed what the bolding meant because they never say explicitly in the paper.

And it's not just the states that raise the minimum wage. The control group consists of states that don't raise the minimum wage -- what kind of states are those? That doesn't sound like a good control group as there is definitely some clustering involved in various measures. [Cough ... red states .. cough.]

Also there is a general trend in falling youth employment that has been happening over the entire period of their study:

This is never mentioned (they mention idiosyncratic trends and heterogeneous trends, but not the overall one). Note that keeping only the data between 2004 and 2007 reduces the impact (as noted above) and also keeps only the data where youth employment-population ratio is stable in the graph above.

Obviously this study has sufficient power and controls such that the natural experiments are good enough to overcome the Card and Krueger study. Marginal Revolution seems to have a preference for citing studies that come to the opposite conclusion as Card and Krueger. I wonder why ...

In the information equilibrium model, the minimum wage ... well, it's not the domain of Economics 101 (see here, here, here or here). Whatever it is, it's more complex than simple regressions can tease out.



I wanted to say something about this from Marginal Revolution the other day:
Firms have an incentive to coordinate the outcome of their randomizations, as coordination allows them to load the firing probability on states of the world in which it is costlier for workers to become unemployed and, hence, allows them to reduce overall agency costs. In the unique equilibrium, firms use a sunspot to coordinate the randomization outcomes and the economy experiences endogenous and stochastic aggregate fluctuations.

So employers time their firings to when the labor market sucks in order to make the firing worse. Basically, employers are the worst kind of sociopaths imaginable.

This is in line with another example of an attempt to use "free market" reasoning to describe what happens in the economy -- this time from Scott Sumner (that I mentioned before here):
For instance, after unemployment compensation returned to the usual 26 weeks in early 2014, job growth accelerated.

But the reason job growth accelerated was that there were more job offers, not more seekers. The model would be that employers waited until extended unemployment insurance ran out so that job seekers would be more desperate. As I said, sociopaths.

Hey, "job creators" -- are you sure you want these people defending your interests? You know they're making you out to be sociopaths, right?


The real reason is not that extended unemployment insurance ended, but rather that Obamacare led to a surge in hiring in the health care sector.

Friday, May 13, 2016

WWII price controls and models

I got in an argument with an anonymous commenter on my previous post, as is my wont. He (it is probably a he) claimed that post-WWII inflation was due to relaxing price controls and had nothing to do with my information equilibrium monetary model, in which the lack of Fed independence and interest rate pegging (and the failure of the general equilibrium solution [GE] to fit the data) suggest we might want to apply the accelerating inflation solution [AI]. The lack of Fed independence lasted from 1942 to 1951 (until the Treasury-Fed accord), so the AI solution should be restricted in scope to that period. Before and after that time, the GE solutions apply (see here).

Note that the AI solution doesn't mean inflation is high, it just means that monetary expansion becomes more and more inflationary as time goes on. If you keep monetary expansion at 5%, inflation might be 5% one year, 10% the next, and 20% the next. However, you could keep inflation at 5% over the entire time by reducing the monetary expansion -- e.g. from 5% to 2.5% to 1.25%.
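Here's a toy numerical sketch of that behavior. The rule that money becomes twice as inflationary each period is an illustrative assumption chosen to reproduce the numbers above, not a fit of the actual model:

```python
# Toy version of the AI solution's qualitative behavior: each unit of
# monetary expansion is twice as inflationary as it was the period
# before (the doubling rule is illustrative, not fit to data).

def inflation_path(expansion_rates):
    """Inflation in period t given monetary expansion g_t: pi_t = g_t * 2^t."""
    return [g * 2**t for t, g in enumerate(expansion_rates)]

# Constant 5% expansion: inflation accelerates 5% -> 10% -> 20%
print(inflation_path([0.05, 0.05, 0.05]))     # -> [0.05, 0.1, 0.2]

# Halving the expansion rate each period holds inflation steady at 5%
print(inflation_path([0.05, 0.025, 0.0125]))  # -> [0.05, 0.05, 0.05]
```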

The price control model (the commenter refuses to believe it is a model) would apply from when the price controls went into place in 1943 to 1946 (per this paper by Paul Evans), with a model-dependent burst of inflation afterwards (1946-1948).

What is interesting is that you can see the impact of price controls on the AI solution:

The AI solution seems to be valid from 1942 until the 1960s (at which point the GE solution that I usually show takes over). From 1943 to 1946, price level growth is about half what you'd expect from the AI solution of the information equilibrium model. When the controls turn off, you get a rise back up to where you'd expect from the AI solution. You can see this is a perturbation (a small model effect) to the accelerating inflation (hyperinflation) information equilibrium model (which covers a much longer period).

You can see this much more clearly in the YoY CPI inflation data:

The deficit of inflation (red) is made up (blue) when the price controls are removed. Now I'd say that the accelerating inflation model "explains" WWII and post-WWII inflation -- the only way you can understand the price level effect is with a model for where the price level should be in the absence of price controls. Price controls themselves are only a small part of the story. In fact, without some underlying model, you have no idea how big the price control effects are. Here, we can say inflation is about half what it would be while the price controls are in effect and about three times higher after you turn them off. Without that, you have no way of knowing when one theory applies or how much of what we observe is explained by which theory.

Economists (and my anonymous commenter) tend to treat the model above as two separate models, when in fact they can both be active at the same time. Dani Rodrik would say there is a theory for when the price controls are in effect and one when they're not. But what we really have is one theory that is in scope for the entire domain, and a second perturbation for when there are price controls.

Thursday, May 12, 2016

I'm not sure we understand inflation

Does anyone remember when there was an argument on the internet as to whether an airplane on a treadmill could take off? This is what I feel like whenever I hear any discussion of inflation and hyperinflation.

To some degree it makes sense that if you print a lot of money, inflation should result. It also makes sense that if you print a lot of money and people expect it to be taken out of circulation soon, inflation won't result. And I guess you can find a way to rationalize that if you print money to buy up debt, you won't get inflation because money is a government liability just like debt.

This really hinges on two questions that economics has not solved: What is money? and What is inflation? ... and there's a third that economics never asks: What does the data say?

Generally, we get what inflation is. When a pint of blueberries is €4 one day and is €4.20 a year later, you've experienced 5% annual inflation. But what about an iPhone? They've not changed much in price, but they've gotten more capable. There's a quality adjustment. Because of improvements in various parts of the supply chain, blueberries are in better shape at the store and therefore more delicious (to those that like them) -- and so might get a hedonic adjustment.
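The blueberry arithmetic, plus a toy quality adjustment (the 10% quality gain and the simple divide-out correction are illustrative assumptions, not the statistical agencies' actual hedonic procedure):

```python
# Measured inflation from a simple price change, then a toy quality
# adjustment: divide the new price by (1 + quality gain) before
# comparing. Both numbers are illustrative.
p0, p1 = 4.00, 4.20              # pint of blueberries, a year apart
inflation = p1 / p0 - 1
print(round(inflation, 3))       # -> 0.05 (5% annual inflation)

quality_gain = 0.10              # assume the berries got 10% "better"
adjusted = (p1 / (1 + quality_gain)) / p0 - 1
print(round(adjusted, 3))        # -> -0.045 (a quality-adjusted decline)
```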

You also get inflation (in modern macro) if everyone thinks the price of blueberries and iPhones will go up because everyone expects inflation. You don't even have to "print money" for this to happen. As for what "money" is, well, that's a completely different -- and not understood -- story.

This came up because I saw a Bloomberg piece by Noah Smith and a response by Cullen Roche. It feels like the airplane on the treadmill. Noah first:
Many economists believe that if you print a lot of money, inflation will go way up. That makes sense, since usually if you increase the supply of something its value falls -- inflation is just a decrease in the value of a currency in terms of real goods and services. ... But many people now believe that the danger of hyperinflation isn’t as big as economists believed in the past. The Fed doesn’t actually control the money supply -- it’s controlled by banks. If Fed money creation is balanced out by private banks withdrawing money from the economy, then money-printing almost certainly won’t cause hyperinflation. This is exactly what has been happening in the past few years. As the Fed has created unprecedented amounts of money through asset purchases under its quantitative easing program and swelling the monetary base, the money supply has increased at a modest and steady pace ... And even if the money supply does increase, inflation still might not result -- people might spend their money less frequently, leading to muted pressure on prices. ... Hyperinflation, like a stock-market crash or a bank run, is a phenomenon that depends crucially on people’s expectations of what other people will do. If everyone thinks that no one else will spend their dollars, inflation stays low. But if some people start to believe that other folks are about to go out and spend their stockpiles of cash, they will respond by doing the same, so they can buy things before prices start to rise. That will turn inflation into a self-fulfilling prophecy. And just as with bank runs and stock market crashes, we know that expectations can shift very quickly and catastrophically. Hyperinflation is like a bank run on a national currency.
Note that Noah Smith invokes both the supply and demand argument as well as the expectations argument. Increasing money supply causes the value to fall, but if money supply is expected to increase (even without a supply increase) you can still get inflation! In a sense, inflation is over-determined. He also refers to both the monetary base and M2, even though neither is really correlated with inflation directly.

Cullen tells us that [high] inflation due to monetary policy is unpossible [in our current situation under current law -- Ed. see comment below]:
If you worked through the accounting and the scenario analysis of the flow of funds, high inflation just didn’t add up. ... QE is just an asset swap. ... Now, the US Treasury could print up its own notes (as it has done at times and assuming law changes) and retire the national debt. But this is just an accounting gimmick which changes one government liability (a bond note) for another (a cash note). The quantity of government liabilities doesn’t change, they simply get relabeled. ... [certain fiscal policy] would result in a larger deficit. Which is exactly what the economy needs today! We’re living in a time of extraordinarily low inflation, a shortage of safe financial assets, weak household balance sheets and a period where monetary policy is obviously weak. We need an increase in fiscal policy ... We’ve been running about a -2.5% deficit the last few years with disinflationary trends so I suspect that we could easily run a larger deficit without causing very high inflation.
Much like the airplane on the treadmill, there are completely different implicit models at work here. For Noah, inflation is related to monetary policy. For Cullen, it is related to fiscal policy.

One thing to note is that the idea that the money supply is correlated with the price level is well founded for high inflation, for an example see here. Interestingly, this is exactly the regime where the evidence that increasing government debt is correlated with inflation comes from. That is to say our evidence that fiscal or monetary policy can lead to inflation comes from the same high inflation regimes. At low inflation, changes in the monetary base (QE in US and Japan), M2 (generally rejected), or government debt (again, see the US and Japan) are not correlated with inflation. The scientific thing to do would be to assume scope conditions: the fiscal and monetary theories of inflation apply only at high inflation. It would then be irresponsible to extrapolate the impact of fiscal or monetary policy on a low inflation economy.

So we have three questions:
  1. What is inflation?
  2. What is money?
  3. What is empirically valid for low inflation?
I don't necessarily have a definitive answer for all of these questions, but I'd like to outline how one would go about tackling them in the information equilibrium framework.

What is inflation?

So what is the price (detector) that represents inflation? Is it the price of all goods in the economy? Generically, we'd tackle that with an AD/AS model $P : AD \rightleftarrows AS$, but let's rewrite it as a two-step process -- demand, money, supply -- $P_{1} : AD \rightleftarrows M$ and $P_{2} : M \rightleftarrows AS$.

We have:

$$P_{1} \; P_{2} = \frac{dAD}{dM} \; \frac{dM}{dAS} = k_{1} \; k_{2} \; \frac{AD}{M} \; \frac{M}{AS}$$

equivalent to our original market $P : AD \rightleftarrows AS$

$$P_{1} \; P_{2} = \frac{dAD}{dAS} = k_{1} \; k_{2} \; \frac{AD}{AS}$$

i.e. $P = P_{1} P_{2}$ and $k = k_{1} k_{2}$

In the first market, increasing $M$ causes $P_{1}$ to go down, holding $AD$ constant; in the second market increasing $M$ causes $P_{2}$ to go up, holding $AS$ constant. These would offset each other leading to no change -- because you'd be holding both $AD$ and $AS$ constant.
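A quick numerical check of that composition -- with illustrative values of $k_{1}$, $k_{2}$, $AD$, and $AS$ -- shows that $M$ drops out of the product $P_{1} P_{2}$:

```python
# P1 = k1 AD/M and P2 = k2 M/AS, so P1 * P2 = (k1 k2) AD/AS for any
# value of M. All numbers here are illustrative.
k1, k2 = 0.7, 1.3
AD, AS = 120.0, 80.0

for M in (1.0, 10.0, 500.0):
    P1 = k1 * AD / M
    P2 = k2 * M / AS
    print(M, P1 * P2)        # the product is k1*k2*AD/AS regardless of M

print(k1 * k2 * AD / AS)     # mathematically 1.365
```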

But can you hold $AD$ constant while changing $M$? Yes, but only if inflation is low. Actually, if inflation is low, you recover the IS-LM model. If inflation is high, an increase in $M$ causes an increase in $AD$.

I realize that someone might want to jump in here and say: If $M$ increases, then inflation is going to be high so the low inflation limit never happens. The problem is that this statement assumes the model it purports to prove (increases in $M$ cause inflation), so we need to check that empirically.

What is money?

So what is that $M$ in the previous model? That's another question that should be left to empirical analysis. The best answer I've found is "M0", i.e. the monetary base (MB) minus reserves. This is roughly equivalent to printed currency and minted coins [1]. After I did that comparison using data from several countries, it became clearer that monetary base reserves are not "money" (at least when considering inflation).

It turns out that M2 may only be good for exchange rates -- which aren't necessarily related to inflation (rather M2 is related to inflation when inflation is high, so again Noah is using the M2 indicator out of scope).

However, the monetary base (including reserves) is "money" if you look at short term interest rates. That brings us to the third question ...

What is empirically valid for low inflation?

The only empirical results that seem to be valid across different monetary policy regimes (even including possible WWII hyperinflation) and many years are for interest rates. See here for the Great Depression (the last "zero bound"/"liquidity trap" era), and here for the model covering the 1920s through today. That is to say the scope of the interest rate model covers low to high inflation.

The model itself is relatively simple:

$$\begin{aligned}
p_{M} : AD & \rightleftarrows M \\
r_{M} & \rightleftarrows p_{M}
\end{aligned}$$

This model basically says the interest rate is in information equilibrium with the price of money. Contrary to what happens in economics typically, the model doesn't say the rate is the price of money. If $M = M0$, then $r_{M0} = r_{long}$ is the long term interest rate (e.g. 10-year). If $M = MB$, then $r_{MB} = r_{short}$ is the short term interest rate (e.g. 3-month). However $M = MZM$ also works for the long term interest rate.

Here is what it looks like:

And here is another view (NGDP = AD) that I used to show more often:

You can see that M0 follows MB for most of the available history -- which means there isn't much to distinguish them. However as QE was put in place, interest rates fell and we got a strong separation.

If the Fed uses short term interest rates to target inflation, the MB line (light blue) can be used to push the M0 line (darker blue) around a bit. But it appears if it gets too far away (to the right, or below, depending on which graph you're looking at), M0 will only grow so fast. Since M0 is related to inflation, if MB is too large, short term interest rates have no effect on inflation.

One thing that has come out of looking at these models is that hyperinflation (or really high inflation) appears to be associated with pegged interest rates (see here, here or here). Under hyperinflation, the information equilibrium model looks a bit like a chaotic dynamical system with exponential separation of elements of phase space (the equations are roughly the same).

It's the pegged interest rates (or exchange rates) that appear to be key to hyperinflation, not monetary base (MB) growth, printing money (M0) or deficits themselves. The latter seem to happen in conjunction with high inflation, but aren't necessarily the cause -- nor should you extrapolate those models outside of their scope. When you peg an interest rate or an exchange rate, you are breaking an information transfer channel by fixing a price.



[1] If you'd like to mention vault cash or argue that printing bank notes doesn't cause inflation but happens in response to it, I will listen to you if you show me a series of quantitative empirical successes that rivals the ones on this page.

Wednesday, May 11, 2016

Lyapunov exponents and the information transfer index

As some of you may know, I'm in the process of writing a paper with frequent commenter (and MD) Todd Zorick on applying the information equilibrium model to neuroscience -- in particular: can you distinguish different states of consciousness by different information transfer indices that characterize EEG data? It's been something of a slog, but one reviewer brought up the similarity of the approach with a "scale dependent Lyapunov exponent" [pdf].

You can consider this post a draft of a response to the reviewer (and Todd, feel free to use this as part of the response), but I thought it was interesting enough for everyone following the blog. Let's start with an information equilibrium relationship $A \rightleftarrows B$ between an information source $A$ and an information destination (receiver) $B$ (see the paper for more details on the steps of solving this differential equation):

$$\frac{dA}{dB} = k \; \frac{A}{B}$$

If we have a constant information source (in economics, partial equilibrium where $A$ moves slowly with respect to $B$), we can say:

$$\frac{A - A_{ref}}{k A_{0}} \equiv \frac{\Delta A}{k A_{0}} = \log \frac{B}{B_{ref}}$$

Let's define $B$ and $B_{ref}$ as $B_{A_{ref}+\Delta A}$ and $B_{A_{ref}}$, respectively, and rewrite the previous equation:

$$B_{A_{ref}+\Delta A} = B_{A_{ref}} \; \exp \left( \frac{\Delta A}{k A_{0}} \right)$$

This is exactly the form of the Lyapunov exponent [wikipedia] $\lambda$ if we consider $A$ (the information source) to be the time variable and take $\lambda = 1/(k A_{0})$:

$$B_{t+\Delta t} = B_{t} \; \exp \left( \lambda \Delta t \right)$$
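As a sanity check (with made-up parameter values), numerically integrating the differential equation in the constant-source limit reproduces the exponential form:

```python
import math

# Integrate dB/dA = B / (k * A0) -- the constant-source limit of the
# information equilibrium condition -- with a crude Euler step and
# compare to B_ref * exp(dA / (k * A0)). Parameter values are illustrative.
k, A0 = 2.0, 3.0
lam = 1.0 / (k * A0)          # the Lyapunov exponent from the text

B, A, step = 1.0, 0.0, 1e-4   # B_ref = 1
while A < 1.0:
    B += (B / (k * A0)) * step
    A += step

exact = math.exp(lam * 1.0)
print(B, exact)               # should agree to roughly the step size
```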

[Update 13 June 2016: As brought up in peer review, we should consider the $B$ to be some aggregation of a multi-dimensional space (in economics, individual transactions; in neuroscience, individual neuron voltages) because $\lambda$ measures the separation between two paths in that phase space.]

This is interesting for many reasons, not the least of which is that a positive $\lambda$ (and it typically is positive in the economics case) is associated with a chaotic system. Additionally, the Lyapunov dimension is directly related to the information dimension. (See the Wikipedia article linked above.)

I want to check this out in this context.


In general equilibrium we have

$$B_{t+\Delta t} = B_{t} \; \exp \left( \frac{1}{k} \; \log \left( \frac{t + \Delta t}{t} \right) \right)$$

which reduces to the other form for $t \gg \Delta t$ (i.e. short time scales).
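Numerically (with illustrative values), the two growth factors agree when $t \gg \Delta t$:

```python
import math

# For t >> dt, (1/k) log((t + dt)/t) ~ dt/(k t), so the general
# equilibrium factor reduces to the Lyapunov form with lambda = 1/(k t).
k, t, dt = 2.0, 100.0, 0.5    # illustrative values with t >> dt

general = math.exp((1.0 / k) * math.log((t + dt) / t))
lyapunov = math.exp(dt / (k * t))
print(general, lyapunov)      # differ only at O((dt/t)^2)
```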

Tuesday, May 10, 2016

Lack of metastability

When I issued some predictions two years ago (that turned out well), I put up a speculative piece about what I called "metastability" in the unemployment rate. It was pure speculation that just came from looking at the unemployment rate graph:

Well, it turns out it is wrong. In the original data, there was (almost) never a case where unemployment went through the red or blue metastable level without pausing at it (it did fall through the blue level in the 1950s without pausing). However, the current data shot right through both the red and blue levels:

Here's the overall picture:

So there's probably no metastability.

Monday, May 9, 2016

Doing economists' work, only better: interest rate edition

I thought I'd give an update of this comparison of the information equilibrium (IE) model (green) with the Blue Chip Economic Indicators forecast resulting from a median of 50 private sector economists (red). I kind of eyeballed the BCEI graph, so click back for the original. It's so far off that there's no reason to be very precise. The IE model is spot on.

Update 13 October 2016

Where is the information encoded?

Bugatti Veyron. Wikimedia commons.

In his book, Cesar Hidalgo has a very nice example of how the information content of a lump of raw materials adds value. He uses a Bugatti Veyron, priced at a few thousand dollars per pound -- at least in its pristine state. In Hidalgo's parable, once a Bugatti is wrecked it is worth much less -- even though it consists of the same raw materials. Hidalgo uses this example to make the point that it's the arrangement of atoms that gives it its value.

However, much of that cost is associated with a very low-dimensional subset of those atomic configurations: the ones that make it look different from a VW or a Honda. Especially the part highlighted with the green rectangle in the picture above. I bet a good knock-off made from exactly the same parts could fetch upwards of 100 thousand dollars -- expensive, but well below the millions that the Bugatti sells for.

As the production costs of clothing have continued to fall, a larger and larger fraction of the value people get from the clothing they buy — especially at the high end of the market — reflects social factors rather than economic ones. Someone might pay €40 for a T-shirt that cost €5 to produce because it carries the label of a famous designer.
This isn't to say that the price isn't encoding information. In this case it's just that the most relevant information is encoded in human brains, not in the Bugatti. The social meaning of a Bugatti, along with patent and trademark protections, creates much of the value.


This parable also lets us see the difference between Hidalgo's information theory approach and the one on this blog. For one, the information equilibrium approach cannot tell you about absolute prices -- only relative prices. This should make sense because we could go and change the numbers on all the bills and all the prices and essentially change nothing. Our world has actually run a few of these experiments. In Japan, you get roughly 100 Yen to the US dollar. Before the Euro, it was about 6 French Francs to the US dollar. Nothing necessarily sets the scale of the base unit of currency.
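The redenomination point is easy to illustrate: multiply every price by any positive constant and all relative prices are unchanged (the goods and numbers below are made up):

```python
# Relative prices are invariant under rescaling all prices by a
# positive constant (a redenomination). Goods and prices are made up.
prices = {'blueberries': 4.0, 't-shirt': 40.0, 'bugatti': 2.5e6}
scale = 100.0                     # e.g. a dollar-to-yen-like rescaling
rescaled = {good: scale * p for good, p in prices.items()}

before = prices['t-shirt'] / prices['blueberries']
after = rescaled['t-shirt'] / rescaled['blueberries']
print(before, after)              # -> 10.0 10.0
```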

If there aren't a large number of any particular good or service being bought and sold, then the information equilibrium approach also doesn't apply. Price dynamics for items like the Bugatti (only a few hundred have been sold) should be considered out of scope.

The information equilibrium approach is about the information entropy of the probability distributions of supply events and demand events. The actual events are only approximately equal to the probability distributions of events if there are a large number of events. Here, you can see the normal distribution become pretty clear after 1000 events:
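The same point in a few lines of code: the sample statistics of a small number of draws are noisy, and only settle toward the underlying distribution's parameters as the number of events grows (the seed and sample sizes are arbitrary):

```python
# Empirical mean and standard deviation of n draws from a standard
# normal: rough at n = 10, close to (0, 1) by n = 1000.
import random
import statistics

random.seed(0)  # arbitrary seed for reproducibility

for n in (10, 1000):
    draws = [random.gauss(0.0, 1.0) for _ in range(n)]
    print(n, round(statistics.mean(draws), 3), round(statistics.stdev(draws), 3))
```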

The main point here is that the price of the Bugatti might not have anything to do with economics, but rather with sociology and psychology. The information equilibrium view is that economics doesn't really exist unless you have a large number of events and ideal information transfer. Outside of that, you are really studying sociology. The dividing point is that the price in the former case is encoded in the distribution of goods and services, while in the latter it is encoded in our collective minds.


PS Slow blogging will continue as both the real job and the book are taking up my time (currently at 22,000 words and about ready to say I have a first draft).