Tuesday, February 28, 2017

Nikkei 225

Here's the Nikkei 225 as a dynamic equilibrium model (same as this one on the S&P 500):


The transition is centered at 2000.7 (likely due to the demographic transition).
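For reference, the functional form being fit here (my shorthand for the model described in the S&P 500 post, so take it as a paraphrase rather than the repository's exact parameterization) is roughly

$$
\log X(t) \simeq \alpha t + c + \sum_{i} \frac{a_{i}}{1 + e^{-(t - t_{i})/b_{i}}}
$$

i.e. a constant log growth rate $\alpha$ plus logistic "shock" steps centered at times $t_{i}$; away from the shocks, $d \log X / dt \simeq \alpha$.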

PS It is interesting that both the Nikkei and S&P 500 have roughly the same average growth rate (neglecting the exogenous shocks) of ~ 11%.

Dynamic equilibrium: the price of gold (and oil)

As I talked about here, information equilibrium models of prices should manifest dynamic equilibria (subject to exogenous shocks). I tried it on the S&P 500, but I thought I'd try it on a commodity: gold. Here are the results:



The transitions are at 1973.1 (the convertibility shock), 1979.2 (inflation/oil shock?), and 2007.0 (the long post-9/11 shock). What is also interesting is that in the absence of exogenous shocks, the (nominal) price of gold tends on average to fall slowly (about 2.7% per year).

The Mathematica code is up on the dynamic equilibrium GitHub repository (the FRED data I used is here; I used the monthly averaging option).
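For readers without Mathematica, here is a minimal Python sketch of the same kind of fit (not the repository code; the file name, column layout of the FRED download, and initial guesses are assumptions and may need tuning):

```python
# Dynamic equilibrium sketch: log price = linear trend + logistic "shock" steps.
# Assumes a FRED-style CSV with a DATE column and the monthly gold price in column 2.
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit

df = pd.read_csv("gold_monthly.csv", parse_dates=["DATE"])
df["price"] = pd.to_numeric(df.iloc[:, 1], errors="coerce")
df = df.dropna(subset=["price"])
t = (df["DATE"].dt.year + (df["DATE"].dt.month - 0.5) / 12.0).to_numpy()
y = np.log(df["price"].to_numpy())

def model(t, a, c, *shocks):
    """Constant log growth rate plus logistic steps; shocks = (amp, center, width) triples."""
    out = a * t + c
    for amp, t0, w in zip(shocks[0::3], shocks[1::3], shocks[2::3]):
        out = out + amp / (1.0 + np.exp(-(t - t0) / w))
    return out

# Rough initial guesses for three shocks near where the post locates them
p0 = [-0.03, 60.0, 1.5, 1973.0, 1.0, 1.0, 1979.0, 1.0, 1.0, 2007.0, 2.0]
popt, _ = curve_fit(model, t, y, p0=p0, maxfev=20000)
print("equilibrium log growth rate (per year):", popt[0])
print("shock centers:", popt[3::3])
```

If the fit converges, the recovered shock centers should land near the transitions quoted above.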

...

PS Also, there's this post on gold. And this one.

...

Update 2 March 2017

Here's another commodity (added to the title as well): oil, using two different measures (West Texas and Brent).



The transitions are centered at 2004.6 and 2015.0 (West Texas) or 2005.0 and 2015.1 (Brent). The former could be due to the Iraq war or the global economic boom. The latter is likely the OPEC decision to increase production to make shale oil production in the US and Canada unprofitable.

The rate of growth absent the shocks is not statistically significantly different from zero.

Monday, February 27, 2017

NAIRU and other connections between inflation and employment

The topic of the hour is NAIRU (Non-Accelerating [1] Inflation Rate of Unemployment). Simon Wren-Lewis put forward the mainstream defense of the concept before I left on vacation. In general, the idea is plausible from a rational agent viewpoint: if labor is plentiful, then there is little pressure on wages. At some level of labor scarcity, the bargaining positions of prospective employees will be better, leading (so the theory goes) to expectations of higher wages for the same level of productivity. With those higher wages chasing scarce output, prices could be expected to increase. Of course other "stories" are possible, but the basic idea is that unemployment below a certain level leads to higher inflation expectations, leading to higher inflation.

I think Simon accidentally gives us the real rationale:
But few of these attempts to trash the NAIRU answer a very simple and obvious question - how else do we link the real economy to inflation?
That is to say, it seems to me to be a post hoc narrative developed out of the original desire to link the real economy and inflation (i.e. monetary policy). The problem is that we have two completely unobservable variables interacting: NAIRU and inflation expectations. This should raise eyebrows to say the least.

But as I said: NAIRU appears to be an attempt to understand theory, not data. What does the data look like?

To that end, I thought I'd introduce a new "model": the dynamic equilibrium picture of the price level. I've previously shown how prices in information equilibrium models should also manifest dynamic equilibria subject to shocks (just like ratios of variables in information equilibrium with each other, such as the unemployment rate). Applying the same procedure as in the previous link, we obtain a pretty good first order description of the core PCE price level and inflation:



There's a single transition centered around 1978.7; all of the other shocks are dwarfed by this one. However, this one major shock doesn't really match up with any shocks to unemployment:


What this broad shock does line up with is the non-equilibrium process of women entering the workforce:


This view of the data is exactly the same demographic story of 1970s inflation as the "quantity theory of labor" (as well as Steve Randy Waldman's account in terms of baby boomers and women together) [2].

Let's return to our discussion of NAIRU above. The data more plausibly connects inflation with the employment-population ratio than with the unemployment rate. In fact, it does not appear (from the data, at least) that the unemployment rate dynamic equilibrium technically has to stop its downward path -- it could proceed towards zero independent of the inflation rate:


This is of course very unlikely because a recession would almost certainly intervene (happening randomly with a time constant of about 8 years). This is a silly counterfactual, but the main point is that unemployment and inflation do not appear to be connected. Based on the dynamic equilibrium view of the existing data, unemployment could head to zero and inflation would remain constant.

[I would like to add that at some low unemployment rate we might run into a "noise floor" of people changing jobs for non-macroeconomic reasons, as I mention in the presentation where this graph originally appears (28 Feb 2017)]

Therefore, while there is a plausible connection between employment and inflation (changes in the employment-population ratio), it's not the story told by NAIRU (inflation and unemployment).

PS See also this Twitter thread.

...

Update 27 Feb 2017

If you squint, you might be able to convince yourself of some residual (anti-)correlation between the unemployment rate and the difference between the dynamic equilibrium model and the data:


However, it is not statistically significant. Even if it did exist, the primary component of 70s inflation would still have nothing to do with the unemployment rate.

Update 27 Feb 2017, the second

Added the code to the GitHub dynamic equilibrium repository.

...

Footnotes

[1] Some Wynne Godley fans seem to think this term is some kind of misnomer, saying "prices" are accelerating. I don't understand the objection. First off, Godley has explicitly used the inflation acceleration terminology to mean inflation is increasing or decreasing. This is also perfectly in alignment with the terminology in physics: price is a position, inflation is a velocity. We say "velocity accelerates" (for a rate of change of velocity), not "position accelerates".

[2] Now even though women entering the workforce appears to be the best candidate for an explanation of 1970s inflation, that doesn't necessarily nail down the mechanism behind it. Women entering the workforce could have produced a big aggregate demand shock (sort of an AD-AS model explanation). In thinking about the mechanism, I also came up with a sexism theory of inflation: women entering the workforce made men demand to be paid more than women, driving up wages and prices. That is to say the connection between inflation and the employment-population ratio could be entirely sociological (i.e. different in different societies), not economic (i.e. the same for any demographic transition).

Friday, February 17, 2017

Monetary base update

There's still no new information about the monetary base drift rate, but it's basically on the same track:


I think we're seeing the "natural" drift rate per this post.

Generative adversarial networks and information equilibrium


This is kind of a placeholder for some thoughts when I get a bit more time. There was a tweet today about Generative Adversarial Networks (GANs) leading to this Medium post. I immediately retweeted it:
This is very interesting for a couple of reasons. It demonstrates how a simple "price" (the judgments of the discriminator D) can cause the supply distribution (G, the generated distribution) to match up with the demand distribution (R, the real distribution). In this setup, however, the discriminator doesn't aggregate information (as it does in the traditional Hayekian view of the price mechanism), but merely notes that there are differences between G and R (the information equilibrium view). See more here.

As such, this sets up (an abstract) information equilibrium model:

$$
``\; D \equiv \frac{dR}{dG} = k \; \frac{R}{G} \;''
$$

with D as the detector, except we have a constant information source (R, the real data). I hope to put this in a more formal statement soon. Real markets would also have the demand (R) adjust to the supply (G).

The example shown at the Medium post is exactly the generator attempting to match a real distribution, which is one way to see information equilibrium operating. Here are the results for the mean and standard deviation of the distribution R:


The other thing I noticed is that there is a long tail towards zero mean in the final result:
Not bad. The left tail is a bit longer than the right, but the skew and kurtosis are, shall we say, evocative of the original Gaussian.


Is this some non-ideal information transfer?

$$
``\; D \equiv \frac{dR}{dG} \leq k \; \frac{R}{G} \;''
$$

Does this have anything to do with this?


Or this (distribution of stock returns)?


Overall, looks very interesting!

Thursday, February 16, 2017

Invariance and deep properties


I have been looking for a good explanation of the physical meaning of the general form of the invariance of the information equilibrium condition (shown by commenter M). The equation:

$$
\frac{dA}{dB} = k \; \frac{A}{B}
$$

is invariant under transformations of the form:

$$
\begin{align}
A \rightarrow & \alpha A^{\gamma}\\
B \rightarrow & \beta B^{\gamma}
\end{align}
$$
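As a quick check (a chain-rule computation spelled out here for convenience, not part of the original comment), substituting the transformed variables back into the condition returns the same equation:

$$
\frac{d(\alpha A^{\gamma})}{d(\beta B^{\gamma})} = \frac{\alpha \gamma A^{\gamma - 1}}{\beta \gamma B^{\gamma - 1}} \; \frac{dA}{dB} = \frac{\alpha A^{\gamma - 1}}{\beta B^{\gamma - 1}} \; k \; \frac{A}{B} = k \; \frac{\alpha A^{\gamma}}{\beta B^{\gamma}}
$$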

The physical (well, economic) meaning of this is that the information equilibrium condition is invariant under transformations that leave the ratio of the local (instantaneous) log-linear growth rates of $A$ and $B$ constant. This is because

$$
\frac{d}{dt} \log \alpha A^{\gamma} = \gamma \frac{d}{dt} \log A
$$

and likewise for $B$. Among other things, this preserves the value of the information transfer index, which means that the information transfer index is the defining aspect of the information equilibrium relationship.

This is interesting because the IT index determines a "power law" relationship between $A$ and $B$. Power law exponents will often have some deep connection to the underlying degrees of freedom. Some vastly different physical systems will behave the same way when they have the same critical exponents (something called universality). Additionally, $k$ is related to Lyapunov exponents, which also represent an important invariant property of some systems.

This is to say that the information equilibrium condition is invariant under a transformation that preserves a key (even defining) property of a variety of dynamical systems.

(This is why physicists pay close attention to symmetries. They often lead to deep insights.)

From here.

Qualitative analysis done right, part 2b



John Handley asks via Twitter, "[W]here do models like [Eggertsson and Mehrotra] fit into your view of quantitative, qualitative, and toy models?"

I think my answer would have to be a qualitative model, but an unsatisfying one. The major problem is that it is much too complex. However, a lot of the complexity comes from the "microfoundations" aspects, the result of which is exactly as put by Mean Squared Errors:

Consider the macroeconomist. She constructs a rigorously micro-founded model, grounded purely in representative agents solving intertemporal dynamic optimization problems in a context of strict rational expectations. Then, in a dazzling display of mathematical sophistication, theoretical acuity, and showmanship (some things never change), she derives results and policy implications that are exactly what the IS-LM model has been telling us all along. Crowd -- such as it is -- goes wild.
Except in this case it's the AD-AS model. The IS-LM model is already a decent qualitative model of a macroeconomy when it is in a protracted slump, and what this paper does is essentially reproduce an AD-AS model version of Krugman's zero-bound/liquidity trap modification of the IS-LM model [pdf]. The crossing curves (e.g. shown above) are far simpler and tell basically the same story as the "microfounded" model.

The model does meet the requirement of being qualitatively consistent with the data. For example, it is consistent with a flattening Phillips curve:
This illustrates a positive relationship between inflation and output - a classic Phillips curve relationship. The intuition is straightforward: as inflation increases, real wages decrease (as wages are rigid) and hence the firms hire more labor. Note that the degree of rigidity is indexed by the parameter γ. As γ gets closer to 1, the Phillips curve gets flatter ...
This is observed. The model also consists of stochastic processes:
An equilibrium is now defined as set of stochastic processes ...
This is also qualitatively consistent with the data (in fact, pure stochastic processes do rather well at forecasting).

Wednesday, February 15, 2017

Behavioral Euler equations and non-ideal information transfer


I read through Noah Smith's question and answer session on Reddit. I liked this quote:
But I doubt the Post-Keynesians will ever create their own thriving academic research program, and will keep on influencing the world mainly through pop writing and polemics. I think they like it that way.
But on a more serious note, Noah linked (again) to a paper [pdf] by Xavier Gabaix about a "behavioral" New Keynesian model. One of the things Gabaix introduces is an "attention parameter" $M$ in the Euler equation as a way of making it conform to reality more.

Let's quote Noah's response to a question about his post on the Euler equation:
An Euler equation is a mathematical relationship between observables - interest rates, consumption, and so on. 
There are actually infinite Euler equations, because the equation depends on your assumption about the utility function. 
The problem is that standard utility functions don't seem to work. Now, you can always make up a new utility function that makes the Euler equation work when you plug in the observed consumption and interest rate data. Some people call this "utility mining" or "preference mining". One example of this is Epstein-Zin preferences, which have become popular in recent years. 
The problem with doing this is that those same preferences might not work in other models. And letting preferences change from model to model is overfitting. So another alternative is to change the constraints - add liquidity constraints, for example. So far, there's lots of empirical evidence that liquidity constraints matter, but very few macro models include them yet. 
Another, even more radical alternative is to change the assumptions about how agents maximize. This is what Gabaix does, for example, in a recent paper [linked above] ...
Gabaix comes up with a modified Euler equation (log-linearized):

$$
x_{t} = M E_{t} [x_{t+1}] - \sigma \left(i_{t} - E_{t}[\pi_{t+1}] - r_{t}^{n} \right)
$$

Now I derived this piece of the New Keynesian model from information equilibrium (and maximum entropy) a few months ago. However, let me do it again (partially because this version has a different definition of $\sigma$ and is written in terms of the output gap).

I start with the maximum entropy condition for intertemporal consumption with a real interest rate $R$ such that:

$$
C_{t+1} = C_{t} (1+R_{t})
$$
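Spelling out the log-linearization used in the next step (a small intermediate step added for convenience; lower-case letters are the log-linear deviations):

$$
\log C_{t+1} = \log C_{t} + \log (1 + R_{t}) \quad \Rightarrow \quad c_{t+1} \simeq c_{t} + r_{t}
$$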

If we have the information equilibrium relationship $Y \rightleftarrows C$ (output and consumption) with IT index $\sigma$, we can say $Y \sim C^{\sigma}$ and therefore, after log-linearizing (and substituting the nominal interest rate and inflation):

$$
\begin{align}
\frac{1}{\sigma} y_{t} = & \frac{1}{\sigma} y_{t+1} -  r_{t}\\
y_{t} = & y_{t+1} - \sigma r_{t}\\
y_{t} = & y_{t+1} - \sigma \left(i_{t} - E_{t}[\pi_{t+1}] \right)\\
x_{t} = & x_{t+1} - \sigma \left(i_{t} - E_{t}[\pi_{t+1}] - r_{t}^{n}\right)
\end{align}
$$

where in the last step we rewrote output in terms of the output gap and the deviation of the real interest rate from the natural rate $r^{n}$.

You may have noticed that the $E$'s are missing. That's because the derivation was done assuming information equilibrium. As I show here, this means that we should include an "information equilibrium" operator $E_{I}$ (think of it as an expectation of information equilibrium):

$$
x_{t} = E_{I} x_{t+1} - \sigma \left(i_{t} - E_{I} \pi_{t+1} - r_{t}^{n} \right)
$$

Under conditions of non-ideal information transfer, we'd actually expect that

$$
x_{t+1} \leq E_{I} x_{t+1}
$$

(you cannot receive more information than is sent). Therefore, in terms of rational expectations (model-consistent expectations), we'd actually have:

$$
x_{t+1} = M E_{I} x_{t+1} = M E_{t} E_{I} x_{t+1} = M E_{t} x_{t+1}
$$

with $M \leq 1$. Back to Gabaix:
In the Euler equation consumers do not appear to be fully forward looking: M < 1. The literature on the forward guidance puzzle concludes, plausibly I think, that M < 1.
We recover a "behavioral" model that can be understood in terms of non-ideal information transfer.

Dynamic equilibrium: Australia's unemployment rate

I applied the dynamic equilibrium model to the Australian unemployment rate, and it works out fairly well. However, one of the interesting things is that FRED only has data up until February 2015 (as of 15 Feb 2017), so the fit and the forecast to 2018 were based on data up to that point. This showed a strange feature of steadily rising unemployment starting in 2012 which doesn't necessarily fit with the model. The parameter fit said that it was the middle piece of a broad shock that was ending, so the forecast projected a decline in the unemployment rate through the rest of 2015 and 2016. I then went back and scraped the data from the ABS website up until December 2016, and the forecast does remarkably well [1] (shown in black).



The shock locations are 1982.6, 1991.2, 2009.0 (the global financial crisis), and 2013.5. Although there is no "recession" between 1991 and 2009, there are some fluctuations in the unemployment rate (possibly Scott Sumner's missing mini-recessions?) – I could probably change the bin size on the entropy minimization and the code would recognize those as recessions as well. However as a broad brush view of the Australian economy the four shocks seem sufficient.

I do wonder about the source and explanation of the shock centered at 2013.5 ‒ it appears broader than the typical recession. Possibly a delayed response to the ECB-caused Eurozone recession of 2011-2012?

Footnotes

[1] Instead of being a true "blind" out-of-sample forecast, the data was really just "inconvenient" out-of-sample. [Ha!]

A bit more on the IT index (technical)


There are a couple of loose ends that need tying up regarding the IT index, one of which is the derivation of the information equilibrium condition (see also the paper) with non-uniform probability distributions. This turns out to be relatively trivial and only involves a change in the IT index formula. The information equilibrium condition is

$$
\frac{dA}{dB} = k \; \frac{A}{B}
$$

And instead of:

$$
k = \frac{\log \sigma_{A}}{\log \sigma_{B}}
$$

with $\sigma_A$ and $\sigma_B$ being the number of symbols in the "alphabet" chosen uniformly, we have

$$
k = \frac{\sum_{i} p_{i}^{(A)} \log p_{i}^{(A)}}{\sum_{j} p_{j}^{(B)} \log p_{j}^{(B)}}
$$

where $p_{i}^{(A)}$ and $p_{j}^{(B)}$ represent the probabilities of the different outcomes. The generalization to continuous distributions is also trivial and is left as an exercise for the reader.
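As a toy numerical illustration of this formula (a sketch with made-up distributions, not tied to any particular model):

```python
import numpy as np

def neg_entropy_sum(p):
    """Sum of p_i log p_i (a negative number) for a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return np.sum(p * np.log(p))

# Uniform case: k reduces to log(sigma_A)/log(sigma_B); here log 8 / log 4 = 1.5
p_A_uniform = np.ones(8) / 8
p_B_uniform = np.ones(4) / 4
print(neg_entropy_sum(p_A_uniform) / neg_entropy_sum(p_B_uniform))  # 1.5

# Non-uniform case: the same alphabet sizes with skewed probabilities give a different k
p_A = [0.5, 0.2, 0.1, 0.05, 0.05, 0.05, 0.03, 0.02]
p_B = [0.7, 0.1, 0.1, 0.1]
print(neg_entropy_sum(p_A) / neg_entropy_sum(p_B))
```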

However, while it hasn't come up in any of the models yet, it should be noted that the above definitions imply that $k$ is positive. But it turns out that we can handle negative $k$ by simply using the transformation $B \rightarrow 1/C$ so that:

$$
\begin{align}
\frac{dA}{dB} = & - |k| \; \frac{A}{B}\\
-C^{2} \frac{dA}{dC} = & - |k| \; \frac{AC}{1}\\
\frac{dA}{dC} = & |k| \; \frac{A}{C}
\end{align}
$$

That is to say an information equilibrium relationship $A \rightleftarrows B$ with a negative IT index is equivalent to the relationship $A \rightleftarrows 1/C$ with a positive index.

Tuesday, February 14, 2017

Qualitative economics done right, part 2a

When did insisting on comparing theory to data become anything other than incontrovertible? On my post Qualitative economics done right, part 2, I received some pushback against this idea in comments. These comments are similar to comments I've seen elsewhere, and represent a major problem with macroeconomics embodied by the refrain that the data rejects "too many good models":
But after about five years of doing likelihood ratio tests on rational expectations models, I recall Bob Lucas and Ed Prescott both telling me that those tests were rejecting too many good models.
The "I"in that case was Tom Sargent. Now my series (here's Part 1) goes into detail about why comparison is necessary even for qualitative models. But let me address a list of arguments I've seen that are used against this fundamental tenet of science.

"It's just curve fitting."

I talked about a different aspect of this here. But the "curve fitting" critique seems to go much deeper than a critique of setting up a linear combination of a family of functions and fitting the coefficients (which does have some usefulness per the link).

Somehow any comparison of a theoretical model to data is dismissed as "curve fitting" under this broader critique. However, this conflates two distinct processes and I think represents a lack of understanding of function spaces. Let's say our data is some function of time d(t). Now because some families of functions fₐ(t) form complete bases, any function d(t) (with reasonable caveats) can be represented as a vector in that function space:

d(t) = Σₐ cₐ fₐ(t)

An example is a Fourier series, but given some level of acceptable error a large enough finite set of terms {1, t, t², t³, t⁴ ...} can suffice (like a Taylor series, or linear, quadratic, etc. regression). In this sense, and only in this sense, is this a valid critique. If you can reproduce any set of data, then you really haven't learned anything. However, as I talk about here, you can constrain the model complexity in an information-theoretic sense.

However, this is not the same situation as saying the data is described by a particular theoretical function f(t) with parameters a, b, c, ...:

d(t) = f(t|a, b, c, ... )

where the theoretical function f is not a complete basis and where the parameters are fit to the data. This is the case of e.g. Planck's blackbody radiation model or Newton's universal gravity law, and in this case we do learn something. We learn that the theory that results in the function f is right, wrong or approximate.

In the example with Keen's model in Part 2 above, we learn that the model (as constructed) is wrong. This doesn't mean debt does not contribute to macroeconomic fluctuations, but it does mean that Keen's model is not the explanation if it does.

A simple way to put this is that there's a difference between parameter estimation (science) and fitting to a set of functions that comprise a function space (not science).
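A toy numerical illustration of the distinction (synthetic data, purely for exposition; the "hidden law" and parameter values here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.1, 10.0, 100)
d = 3.0 * np.exp(-0.4 * t) + 0.02 * rng.standard_normal(t.size)  # "data" from a hidden law

# (1) Fitting to a function space: a high-order polynomial basis fits almost anything,
# so a good fit by itself teaches us nothing about the underlying theory.
poly = np.polynomial.Polynomial.fit(t, d, deg=9)
basis_rms = np.sqrt(np.mean((poly(t) - d) ** 2))

# (2) Parameter estimation: a two-parameter theoretical form d(t) = a * exp(b * t),
# fit as a straight line in log space (valid here since d > 0).
b_hat, log_a_hat = np.polyfit(t, np.log(np.clip(d, 1e-6, None)), deg=1)
model = np.exp(log_a_hat + b_hat * t)
model_rms = np.sqrt(np.mean((model - d) ** 2))

print("polynomial basis RMS error:", round(basis_rms, 4))    # small, but uninformative
print("two-parameter model RMS error:", round(model_rms, 4)) # small only if the theory is right
print("estimated (a, b):", round(np.exp(log_a_hat), 2), round(b_hat, 2))  # ~ (3.0, -0.4)
```

In the first case any data set would have been fit about as well; in the second, a good fit and sensible parameter estimates tell you the theoretical form is at least approximately right.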

"It shows how to include X."

In the case of Keen, X = debt. There are two things you need to do in order to show that a theoretical model shows how to include X: it needs to fail to describe the data when X isn't included, and it needs to describe the data better when X is included.

A really easy way to do this with a single model is to take X → α X and fit α to the data (estimate the parameter per the above section on "curve fitting"). If α ≠ 0 (and the result looks qualitatively like the data overall), then you've shown a way to include X.

Keen does not do this. His ansatz for including debt D is Y + dD/dt. It should be Y + α dD/dt.
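As a concrete version of that test (a sketch on synthetic series; the series, the noise level, and the "true" α below are all made up for illustration, not taken from Keen's model or any data set):

```python
# Estimate alpha in target ≈ Y + alpha * dD/dt by least squares, and check alpha != 0.
import numpy as np

rng = np.random.default_rng(1)
n = 200
Y = np.cumsum(0.5 + 0.2 * rng.standard_normal(n))        # stand-in for output Y
D = np.cumsum(1.0 + 0.5 * rng.standard_normal(n))        # stand-in for debt D
dDdt = np.gradient(D)

true_alpha = 0.3                                          # hidden "truth" for this toy example
target = Y + true_alpha * dDdt + 0.1 * rng.standard_normal(n)

# One-regressor least squares for alpha (no intercept), with a rough standard error
resid = target - Y
alpha_hat = dDdt @ resid / (dDdt @ dDdt)
se = np.sqrt(np.sum((resid - alpha_hat * dDdt) ** 2) / (n - 1) / (dDdt @ dDdt))
print(f"alpha = {alpha_hat:.2f} +/- {se:.2f}")            # alpha consistent with 0 => X adds nothing
```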

"It's just a toy model."

Sure, that's fine. But toy models nearly always a) perform qualitatively well themselves when compared to data, or b) are simplified versions of much more complex models where the more complex model has been compared to data and performed reasonably well. It's fine if Keen's debt model is a toy model that doesn't perform well against the data, but then where is the model that performs really well that it's based on?

"It just demonstrates a principle."

This is similar to the defense that "it's just a toy model", but somewhat more specific. It is only useful for a model to demonstrate a principle if that principle has been shown to be important in explaining empirical data. Therefore the principle should have performed well when compared to data. I used the example of demonstrating renormalization using a scalar field theory (how it's done in some physics textbooks). This is only useful because a) renormalization was shown to be important in understanding empirical data with quantum electrodynamics (QED), and b) the basic story isn't ruined by going to a scalar field from a theory with spinors and a gauge field.

The key point to understand here is that the empirically inaccurate qualitative model is being used to teach something that has already demonstrated itself empirically. Let's put it this way:


After the churn of theory and data comes up with something that explains empirical reality, you can then produce models that capture the essence of the theory that captures reality. Or more simply: you can only use teaching tools after you have learned something.

In the above example, QED was the empirical success that led to using scalar field theory to teach renormalization. You can't use Keen's models to teach principles because we haven't learned anything yet. As such, Keen's models are actually evidence against the principle (per the argument in curve fitting above). If you try to construct a theory using some principle and that theory looks nothing like the data, then that is an indication that either a) the principle is wrong or b) the way you constructed the model with the principle is wrong.

Sunday, February 12, 2017

Added some models to the repository ...

Nominal growth rate from the Solow model. Result is in the code repository linked below.

I added the simple labor model (Okun's law), Solow model, and the "Quantity Theory of Labor and Capital" (QTLK) to the GitHub information equilibrium repository:

https://github.com/infotranecon/informationequilibrium

Let me know if you can see these files. They're Mathematica notebooks (made in v10.3).


Friday, February 10, 2017

Classical Econophysics

Figure from Classical Econophysics


That's the title of a book co-authored by Ian Wright (see here) that looks at "statistical equilibrium". Here's Ian (from the link):
The reason we can be a bit more optimistic [about understanding the economy] is that some very simple and elegant models of capitalist macrodynamics exist that do a surprisingly effective job of replicating empirical data. ... I co-authored a book, in 2009, that combined the classical approach to political economy (e.g., Smith, Ricardo, Marx) with the concept of statistical equilibrium more usually found in thermodynamics. A statistical equilibrium, in contrast to a deterministic equilibrium that is normally employed in economic models, is ceaselessly turbulent and changing, yet the distribution of properties over the parts of the system is constant. It’s a much better conceptual approach to modelling a system with a huge number of degrees-of-freedom, like an economy.

I think that this kind of approach is the statistical mechanics to information equilibrium's (generalized) thermodynamics/equations of state. Much like how you can compute the pressure and volume relationship of an ideal gas from expectation values and partition functions [e.g. here, pdf], information equilibrium gives the general forms those equations of state can take.

I curated a "mini-seminar" of blog posts connecting these ideas, in particular this post. I try to make the point that an economic system "... is ceaselessly turbulent and changing, yet the distribution of properties over the parts of the system is constant." (to quote Ian again). That is key to a point that I also try to make: maybe "economics" as we know it only exists when that distribution is constant. When it undergoes changes (e.g. recessions), we might be lost as physicists (usually) are in dealing with non-equilibrium thermodynamics (which for economics might be analogous to sociology).

PS I also tried to look at some information equilibrium relationships in one of Ian's agent-based models (here, here).

Thursday, February 9, 2017

Information equilibrium code repositories


I have set up some code repositories on GitHub for the Mathematica notebooks (and hopefully eventually Python, and now Python as well):
Dynamic equilibrium [presentation]

https://github.com/infotranecon/dynamicequilibrium

IEtools [python/jupyter notebook implementation]

https://github.com/infotranecon/IEtools

All information equilibrium, all the time

A couple of easy ways to keep up with all my posts:

https://twitter.com/infotranecon

follow us in feedly

The Twitter version comes with some other random nerdy and political stuff. Feedly is just the posts. Also, the Feedly engine plus the Newsify app works for me (Newsify lets you download and read your RSS feeds offline, which is a great help when you travel a lot like I do).

Since I end up criticizing both mainstream econ and heterodox econ, I end up having to do a lot of the dissemination myself :)

Thanks for sharing!

Update: Well, it was supposed to be easy. I think I fixed the issues with the Follow buttons (gave up on the Twitter one, which seems to have broken my old button as well ...)

Wednesday, February 8, 2017

Qualitative economics done right, part 2

Something that I've never understood in my many years in the econoblogosphere as a reader and eventually as a writer is the allure of Steve Keen. One of the first comments on my blog was from someone saying I should check him out. He wrote a book called Debunking Economics back in 2001, claims to have predicted the financial crisis (or maybe others claim that feather for him), and since then he's been a prominent figure in a certain sector of left-leaning economic theory. He's been the subject of a few posts I've written (here, here, here, and here). But mostly: I don't get it. Maybe the nonlinear systems of differential equations are shiny objects to some people. It might just be the conclusions he draws about economics (i.e. "debunking" it), debt, and government spending ‒ although the words "conclusions" and "draws" should probably be replaced with "priors" and "states". Hey, I love lefty econ just as much as the next person.

UnlearningEcon suggested that Keen made ex ante qualitative predictions using his models:
Keen (and Godley) used their models to make clear predictions about crisis
This statement, along with the accompanying discussion of qualitative models, is what inspired this series of posts (Part 1 is here, where I lay out what we mean by qualitative analysis; Part 3 will be about Godley's models). There are some who say Keen predicted the financial crisis, but there are several things wrong with this.

He only appears to have predicted in 2007 [pdf] that housing prices would fall in Australia. First, this is after the housing collapse had already started in the US (2005-2006). The global financial crisis was starting (the first major problem happened 7 Feb 2007, almost exactly 10 years ago, and that pdf is from April). Additionally, housing prices didn't fall in Australia and Keen famously lost a bet. And note that the linked article refers to him as the "merchant of gloom" ‒ Keen had already acquired a reputation for pessimism. Aside from this general pessimism, there do not appear to be any ex ante predictions of the financial crisis.

Ok, but UnlearningEcon said predictions about the crisis. Not necessarily predictions of the crisis. That is to say Keen had developed a framework for understanding the crisis before that crisis happened, and that's what is important.

And with some degree of charity, that can be considered true. Most (all?) of Keen's models appear to have a large role for debt to play, and in them a slowdown in the issuing of debt and credit will lead to a slowdown in the economy (GDP). Before the housing crisis, the US had rising levels of debt. Debt levels (we think) cannot continue to rise indefinitely (relative to GDP), so at some point they must level off or come back down. And the financial crisis was preceded by a slowdown in the US housing market.

The problem with this is that Keen's model defines GDP' as output (Y, i.e. what everyone else calls GDP without the prime) plus debt creation (Y + dD/dt ~ GDP') (see here or here for discussion). Therefore it comes as no surprise that debt has an impact on GDP'. And since debt cannot increase forever (we think), it must level off or fall if it is currently rising. Therefore there must eventually be an impact to GDP'. And due to Okun's law, that means an impact on employment.

That is to say those qualitative predictions of the model (slowdown in debt creation will lead to fall in GDP') are in fact inputs into the model (Y + dD/dt ~ GDP'). I think JW Mason put it well:
Honestly, it sometimes feels as though Steve Keen read a bunch of Minsky and Schumpeter and realized that the pace of credit creation plays a big part in the evolution of GDP. So he decided to theorize that relationship by writing, credit squiggly GDP. And when you try to find out what exactly is meant by squiggly, what you get are speeches about how orthodox economics ignores the role of the banking system.
Basically we have no idea why we decided debt creation is a critical component of GDP' besides some chin stroking and making serious faces about debt. Making serious faces about debt has been a pastime of humans since the invention of debt. However! Shouldn't we credit Keen with the foresight to add debt to the model regardless of why he did it? Maybe he hadn't quite worked out the details of why that dD/dt term goes in there, but intuition guided him to include it. The inclusion led to a model that Keen used to make qualitative predictions, therefore we should look at the qualitative properties of the models. I'll drop the prime on GDP' from here on out.

This is where that series of tests I discussed in Part 1 comes into play. Most importantly: you cannot use a qualitative prediction to validate a model as a substitute for (qualitatively) validating it with the existing data. Let's say I have a qualitative model that qualitatively predicts that a big shock to GDP (and therefore employment by Okun's law) is coming because the ratio of debt to GDP is increasing. Add in a Phillips curve so that high unemployment means deflation or disinflation. Now how would that look generically (qualitatively)? It'd look something like this:


Should we accept this model because of its qualitative prediction? Look at the rest of it:


It was a NAIRU model with perfect central bank inflation/employment targeting that added in Keen's debt mechanism (trivially) when debt reached 100% of GDP. But here is Keen's model from "A dynamic monetary multi-sectoral model of production" (2011):


The qualitative prediction is the same, but the qualitative description of the existing data from before the prediction is very different. Here we have two "Theory versus data" graphs:



Actually the simplistic model is a much better description (measured by average error) of the data! But neither really captures the data in any reasonable definition of "captures the data".

The issue with using Keen's models qualitatively is that they fail these qualitative checks against the existing data. Here are a couple more "Theory versus data" graphs (the first of real GDP growth is from the model above, the second is from Keen's Forbes piece "Olivier Blanchard, Equilibrium, Complexity, And The Future Of Macroeconomics" (2016)):



And I even chose the best part of the real GDP data (the most recent 20 years) to match up with Keen's model. But there's a lot more to qualitative analysis than simply looking at the model output next to US data. To be specific, let's focus on the unemployment rate.

Unemployment: a detailed qualitative analysis

First, Keen's graph not only doesn't look much like US data as mentioned:


It doesn't look much like the data from other countries either (here the EU and Japan):



Additionally, Keen's graphs don't share other properties of the unemployment data. Keen's model has strong cyclical components (I mean pure sine waves here), while the real data doesn't. We can see this by comparing the power spectrum (Fourier transforms) [1] (data is dotted, Keen's model is solid):


Another transform (more relevant to the information equilibrium approach) involves a linear transform of the logarithm of the data:


We can see the unemployment data has a strong step-like appearance, which is due to the roughly constant rate of fractional decrease in the unemployment rate after a shock hits [2]. This property is shared with the data from the EU and Japan shown above. Keen's model has the unemployment rate decreasing at a rate that is proportional to the height of the shock. Instead of flat steps, this results in trenches after each shock that decrease in depth as the shock decreases.

We can also observe that the frequency of the oscillation in Keen's model is roughly constant (it slightly decreases over time). However, the spacing between the unemployment peaks in the empirical data is consistent with a random process (e.g. a Poisson process).

There's a big point I'd like to make here as well: just because you see the data as cyclic, it doesn't mean any model result that is cyclic qualitatively captures the behavior of the system. I see this a lot out there in the econoblogosphere. It's a bit like saying any increasing function is the same as any other: exponential, quadratic, linear. There are properties of cycles (or even quasi-periodic random events) beyond just the fact that they repeat.

Anyway, these are several qualitative properties of the unemployment data. In most situations these kinds of qualitative properties derive from properties of the underlying mechanism. This means that if you aren't reproducing the qualitative properties, you aren't building understanding of the underlying system.

In fact, exponentially distributed random amplitude Gaussian shocks coupled with a constant fractional decrease in the unemployment rate derived from, say, a matching function yield all of these qualitative features of the data. Here's another "Theory versus data" using this model [3]:


The underlying system behind this qualitative understanding of the unemployment data has recessions as fundamentally random, predictable only in the sense that the exponential distribution has an average time between shocks (about 8 years). These random shocks hit, and then at a constant fractional rate the newly unemployed are paired with job openings.
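Here is a minimal simulation of that qualitative picture (my own sketch; the distributions, shock shape, and parameter values are placeholders rather than the ones used to produce the figure): shocks arrive as a Poisson process with a mean spacing of about 8 years, each shock adds a smoothed step of random size and width to log u, and between shocks the unemployment rate declines at a constant fractional rate.

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 1.0 / 12.0                        # monthly steps
T = 60.0                               # years of simulated history
t = np.arange(0.0, T, dt)

mean_gap = 8.0                         # average years between shocks (per the post)
decay = 0.09                           # constant fractional decline of u per year (placeholder)

# Poisson-process shock times
shock_times = []
next_shock = rng.exponential(mean_gap)
while next_shock < T:
    shock_times.append(next_shock)
    next_shock += rng.exponential(mean_gap)

# log u(t) = steady fractional decline plus a smoothed random step for each shock
log_u = np.log(5.0) - decay * t        # start from 5% unemployment
for ts in shock_times:
    amp = rng.lognormal(mean=-0.7, sigma=0.4)    # random shock size (placeholder distribution)
    width = rng.lognormal(mean=-1.0, sigma=0.3)  # random shock width in years (placeholder)
    log_u = log_u + amp / (1.0 + np.exp(-(t - ts) / width))

u = np.exp(log_u)                      # simulated unemployment rate in percent
print("shocks at (years):", np.round(shock_times, 1))
print("simulated u range (%):", round(u.min(), 1), "to", round(u.max(), 1))
```

The resulting series shows the step-like declines and randomly spaced peaks discussed above, without any built-in sine-wave cycle.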

Now it is true that the shocks themselves might originate from shocks in the financial sector or housing prices, or fluctuations in wealth and income (or all three). And there may be a debt mechanism behind how the shocks impact the broader economy. However, Keen's model does not even qualitatively describe how these pieces fit together.

So how does Keen's model stack up against the heuristics I put forward in my post on how to do qualitative analysis right? Well ...
  • Keen's models generally have many, many parameters (I stopped counting after 20 in the model discussed above). The Lorenz limit cycle version from the Forbes piece appears to have 10. [4]
  • If RGDP is as above, then Keen's model does have a log-linear limit for RGDP growth. However, the price level fails to have a log-linear limit (since the rate goes from on average positive to increasingly negative, the price level will go up log-linearly and then fall precipitously).
  • As shown, there is a time scale on the order of 60 years controlling the progression from cyclic unemployment to instability in addition to the roughly 7-8 year cyclic piece. This makes it pure speculation given the data (always be wary of time scales on the order of the length of available data, too).
  • Keen's model is not qualitatively consistent with the shape of the fluctuations (per the discussion above).
  • Keen's model is not qualitatively consistent with the full time series (per the discussion above).

The overall judgment is qualitative model fail.

...

Update 10 February 2017

One of the curious push-backs I've gotten in comments below is that Keen's model "is just theoretical musings", or that I am somehow against new ideas. The key point to understand is that I am against the idea of saying a model has anything to do with a qualitative understanding of the real world when the model doesn't even qualitatively look like the real world. 

Keen himself thinks his model is more than just theoretical musings. He doesn't think the model just demonstrates a principle. He doesn't think these are just new ideas that people might want to consider. He thinks it is the first step towards the correct theory of macroeconomics. Here's the conclusion from Keen's "A dynamic monetary multi-sectoral model of production" (2011):
Though this preliminary model has many shortcomings, the fact that it works at all [ed. it does not] shows that it is possible to model the dynamic process by which prices and outputs are set in a multisectoral economy [ed. we don't learn this because the model fails to comport with data]. ... The real world is complex and the real economy is monetary, and complex monetary models are needed to do it justice [ed. we don't know monetary models are the true macro theory]. ... Given the complexity of this model and the sensitivity of complex systems to initial conditions, it is rather remarkable that an obvious limit cycle developed [ed. limit cycles are not empirically observed] out of an arbitrary set of parameter values and initial conditions—with most (but by no means all) variables in the system keeping within realistic bounds [ed. they do not]. ... For economics to escape the trap of static equilibrium thinking [ed. we don't know if this is the right approach], we need an alternative foundation methodology that is neat, plausible, and—at least to a first approximation—right [ed. it is not]. I offer this model and the tools used to construct it as a first step towards such a neat, plausible and generally correct approach to macroeconomics [ed. it is not because it is not consistent with the data].
Keen's model does not "work", it does not capture the real world even qualitatively, it is not "right", and there is no reason to see this as a first step towards a broader understanding of macroeconomics because it is totally inconsistent with even a qualitative picture of the data.

In my view, Keen's nonlinear differential equations are travelling the exact same road as the "rational expectations" approach of Lucas, Sargent, and Prescott. They both ignore the empirical data, but are pushed because they fit with some gut feeling about how economies "should" behave. In Keen's case, per JW Mason's quote above, it's "credit squiggly GDP". With LSP [5], it's that people are perfect rational optimizers and markets are ideal. Real world data is seen through the lens of confirmation bias even when it looks nothing like the models. This approach is not science.

...

Footnotes:

[1] Interestingly, the power spectrum (Fourier transform) of the unemployment rate looks like pink noise with exponent very close to 1. Pink noise arises in a variety of natural systems.
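[A sketch of how one could check that exponent (the file name and column layout of the FRED-style download are assumptions, and the log-log fit is the crudest possible estimator):]

```python
# Sketch: estimate the spectral exponent of the unemployment rate (pink noise ~ 1/f^beta).
# Assumes a FRED-style CSV (e.g. UNRATE) with a DATE column and the rate in column 2.
import numpy as np
import pandas as pd

u = pd.read_csv("UNRATE.csv", parse_dates=["DATE"]).iloc[:, 1].astype(float).to_numpy()
u = u - u.mean()

spec = np.abs(np.fft.rfft(u)) ** 2                 # periodogram
freq = np.fft.rfftfreq(u.size, d=1.0 / 12.0)       # cycles per year for monthly data
mask = freq > 0                                    # drop the zero-frequency bin

slope, intercept = np.polyfit(np.log(freq[mask]), np.log(spec[mask]), deg=1)
print("estimated spectral exponent:", round(-slope, 2))  # ~1 would look like pink noise
```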

[2] The constant rate is related to the linear transform piece and the fractional decrease is related to the logarithm piece.

[3] I also made this fun comparison of Keen's model, the data, and an IT dynamic equilibrium model:


[4] The random unemployment rate model produced from the information equilibrium framework has 2 for the mean and standard deviation of the random amplitude of the shocks, 2 for the mean and standard deviation of the random width of the shocks, 1 for the Poisson process of the timing of the shocks, and finally 1 for the rate of decline of the unemployment rate for a total of 6.

[5] Not Lumpy Space Princess, but Lucas, Sargent, and Prescott.

Tuesday, February 7, 2017

Why not ask a scientist?

The editors of Bloomberg View ask Why Not Make Economics a Science?, and Noah Smith adds a bit more to the discussion on his blog. It's mostly good stuff.  One of the strange things is that when the editors put forward their thesis:
Reviving economics as a science will require economists to act more like scientists [pdf].
the link at the end connects you to Paul Romer's "The Trouble With Macroeconomics". Paul Romer is an economist, and gets a lot of how science is conducted wrong. His concept of a scientific model is mistaken, his analogy with string theory is misguided, his mathiness charge just demonstrates the real problem more clearly, and he's just as unscientific about his approach to explaining a theoretical result as any macroeconomist.

I have written many posts on how non-scientists get science wrong as it should apply to economics (here, here, here, and here, for example). The two biggest ways it goes wrong are an obsession with so-called unrealistic assumptions and an elevation of science to what I call Wikipedia science. Here are the editors making the first mistake:
Their ambition has been to build mathematically elegant and internally consistent models of the economy, even if that requires wholly unrealistic assumptions. Granted, just as maps have to simplify complex terrain, theoretical models must ignore aspects of reality to be any use. But there’s a line between simplification and gross distortion, and modern macroeconomics has crossed it.
This misses the forest for the trees, and misses the point of Milton Friedman's positive economics essay. The assumptions are not the issue. The editors do touch on the issue:
If models are refuted by the observable world, toss them out.
If models are refuted, toss them out. If they are not, then keep looking into them. However refuted models may have valid assumptions, and empirically valid models may have refuted assumptions. This is a principle of how modern theoretical physics proceeds -- it's called effective theory. Physicists have had to come to terms with the fact that it may be impossible to ever understand what is really happening at a fundamental level (due to e.g. lack of new accelerator experimental results) and so treat understanding as tentative, effective. Macroeconomists might have to come to terms with the fact that human behavior is not amenable to tractable mathematical description and may have to work around it (that's a good description of what I do on this blog using information theory to get around human behavior and provide a short cut to understanding complex systems). 

You may think that if the empirical validity of assumptions doesn't necessarily matter, then it doesn't matter if you use realistic or unrealistic assumptions. But this represents another problem illustrated by the editors:
Rely on experiments, data and replication to test theories and understand how people and companies really behave.
Maybe the editors are unaware of the SMD theorem, but it is entirely possible "real behavior" is not relevant to macroeconomics (or at least not relevant to tractable macro theories). This restriction is exactly the kind of straitjacket that Mean Squared Errors put so well:
Consider the macroeconomist.  She constructs a rigorously micro-founded model, grounded purely in representative agents solving intertemporal dynamic optimization problems in a context of strict rational expectations.  Then, in a dazzling display of mathematical sophistication, theoretical acuity, and showmanship (some things never change), she derives results and policy implications that are exactly what the IS-LM model has been telling us all along.  Crowd -- such as it is -- goes wild.
Just substitute realistic behavior and empirically valid assumptions for rational expectations and representative agents. Until a rigorous framework is in place, macroeconomists will work their way around those straitjackets as well.

The thing is that you should add assumptions or take them away based on whether the resulting macro theory is empirically valid or not. Adding them or taking them away because you stroke your chin and make a "very serious person" face is not scientific.

It's not clear if the editors make the second mistake of thinking science is Wikipedia science. By "Wikipedia science", I mean the popular perception that science doesn't make mistakes, doesn't go backwards, or has everything figured out -- that you could look everything up on Wikipedia. These quotes give me pause:
Far from advancing, the science of economics has been going backwards. ... In just about every branch of science, theoretical research has been crucial to achieving breakthroughs. In macroeconomics, it has held progress back.
Sometimes going backwards is what needs to happen. And the second piece is not true. Theoretical paradigms in physics (ones based on making a "very serious person" face and stroking your chin) have definitely hindered progress. Aether and objections to the randomness of quantum mechanics are two that people might be familiar with. Full renormalizability is a more technical one (the rejection of that is what has led to the embrace of all theories as effective theories). If theory doesn't make progress, that is ok. It's true that lack of progress is a heuristic that hints something might be going wrong, but it doesn't tell you what. If your car doesn't start, that just tells you something is wrong; it doesn't tell you what. You might be out of gas. Your alternator might be shot. In macroeconomics, maybe all this insistence on including human behavior is the problem (also here).

*  *  *

So, why not ask a scientist whether and how macro fails to be scientific? This scientist has put together a list (of both valid and invalid complaints, in my opinion). A short summary:

The identification problem (& complexity)

The basic point is that more than one set of parameter values can result in the same macro observations. In science, we'd try to determine the values from the micro theory (which is what economists do [at least they say they do ...]), but also reduce the number of parameters by reducing the complexity of the theory. This is part of a general problem: macroeconomic theories are way too complex given the limited empirical data.

Economics does not appear to treat limits properly & Economics does not deal with domains of validity (scope)

Is a model valid in a particular situation? Can you apply rational expectations when the system is far from a general equilibrium? What time period is considered a long time versus a short time? These are questions that not only aren't addressed, but often fail to be even asked.

Economics accepts stories too easily

This includes making a "very serious person" face and stroking your chin. A whole lot of macro seems to proceed by narrative. Those unrealistic assumptions? Every one has a story behind it. They keep being used because of the power of the story.

Adding realistic human behavior? There's a story behind it involving a lot of psychology experiments and irrational decision making. I can assure you there isn't a macroeconomic empirical success behind every human behavior assumption (mostly because there aren't a lot of empirical successes).

... Note that the stories and lack of scope conditions are a toxic combination that means you have no idea where to stop telling stories and adding assumptions. Just keep adding things you can tell a story about until you get some agreement with the data! Scope conditions help because they tell you whether a story is relevant.