Sunday, December 21, 2014

A strange phrenology of money

A section of something cool I saw at the Tate Modern yesterday that is mildly relevant. Details to follow when I have a real computer. [Here is a link; it's part of a work by Alexander Brodsky and Ilya Utkin.]

Gah! I always seem to miss the big macro debate flare ups when I'm on vacation.

So Paul Krugman called out monetarism and Scott Sumner and Nick Rowe responded (among others). I will add some links when I have access to a real computer and am not writing on my iPhone while waiting to get on a bus to go to Stonehenge for the winter solstice.

Sumner touts his model getting e.g. the US and Australia right. Japan and Switzerland, too.

But why can't the BoJ get inflation, whereas Australia can reach NGDP targets? No, really. Why? The central banks are just organizations of humans ... If I took the BoJ and put it in charge of Australia and vice versa, would I get different results? Why? That actually implies that the specific humans in charge matter! Literally it matters that Greenspan or whoever is in charge -- the math is irrelevant, and it becomes a question similar to whether a president or general was a 'good leader'. Macroeconomics becomes a strange phrenology of money with lots of detailed equations and models that all purport to divine that ineffable quality of a "great man" (or woman) that could get the inflation target he or she wanted at any central bank.

And Nick Rowe's word games posit a central bank that "does nothing" yet is massively deflationary. I don't think anyone has said that a central bank can't achieve deflation. What really is problematic is that the definition of "doing nothing" is irrelevant to Japan: M(t) and m(t) both define the same path of the monetary base, both of which are increasing, and neither of which seems to affect the price level. It really doesn't matter how you word it.

However, the information transfer model shows how a Japan or an Australia can exist simultaneously in the same framework. If they traded central banks, they'd get largely the same results as they're getting now [i.e. Japan would be in a liquidity trap and Australia wouldn't]. And it shows how 'doing nothing' -- keeping a constantly increasing monetary base -- eventually leads to impotent monetary policy. And it shows how major increases in the base can have zero impact on inflation.

And the model isn't even that complicated.

[Updated 1/7/2015 with links and some bracketed comments.]

Saturday, December 13, 2014

Echoes of the financial crisis



I'm procrastinating packing for my trip.


But Scott Sumner mentioned the Big Bang today, which, since I'm a physicist, gives me license to opine on the subject at hand. He says:
Instead, the recent deflation [2014] is a distinct echo of the actual NGDP “deflation” ... that occurred in the early part of this decade [2010-2011].

At the top of this post is the (spatial) power spectrum of the fluctuations in the cosmic microwave background (CMB) radiation. This (spatial) ringing (echo) is similar to the ringing you'll sometimes see in a JPEG image, which derives from e.g. "overshooting".

Sumner's story is that earlier changes in NGDP are showing up now in inflation after being obscured by commodity prices. I'd like to put forward an alternative story using the information transfer model. The current deflation is mostly part of a long run trend, but there are fluctuations that are echoes of the real big bang: the 2008 financial crisis.

First, let's separate out the contributions to the price level from NGDP terms and monetary base terms (we can do this in the information transfer model):


There is a faint hint of a ringing from the late 2008 financial crisis. I did a fit to a simple $\sim \exp (t) \sin (t)$ model for these two components:


This actually works really well. However, you'd only be able to see it if you separate out these two components: in the sum, the fluctuations are almost too small to pull out relative to the magnitude of the noise.
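
If you'd like to try a fit like this yourself, here is a minimal sketch (the data is synthetic and the function and parameter names are my own illustration -- the actual fit used the two model components above):

    # A minimal sketch of the ringing fit: a growing/decaying oscillation
    # a * exp(gamma * t) * sin(omega * t + phi) fit to a de-trended component.
    # Synthetic data stands in for the model components.
    import numpy as np
    from scipy.optimize import curve_fit

    def ringing(t, a, gamma, omega, phi):
        return a * np.exp(gamma * t) * np.sin(omega * t + phi)

    t = np.linspace(0.0, 6.0, 200)  # years since late 2008
    rng = np.random.default_rng(0)
    data = ringing(t, 0.01, -0.3, 2.0, 0.5) + 0.001 * rng.normal(size=t.size)

    params, _ = curve_fit(ringing, t, data, p0=[0.01, -0.1, 1.0, 0.0])
    print(params)  # fitted amplitude, growth rate, frequency and phase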

This model puts the source of the ringing at the financial crisis: the commodity booms of the early part of the decade likely follow from it (basically a rebound from the low), as does the recent deflation (which sits on top of a long run trend towards deflation).

It's still not a perfect model, but it's an interesting take. Here's the graph of all the pieces together:




Friday, December 12, 2014

An information transfer traffic model


David Glasner is getting into complexity theory with his two recent posts. In the post from today he talks about traffic as a non-equilibrium complex system and quotes from a paper by Brian Arthur. In an effort to win Glasner over to an information theory view of economics, I'd like to show that a traffic system can be understood in a first order analysis with the information transfer model (or information equilibrium model). The power of the framework is demonstrated by the fact that I put this entire post together in a little over an hour on my lunch break.

Let me quote from the abstract of the paper by Fielitz and Borchardt [1] that formulates the information transfer model -- non-equilibrium complex systems are exactly what the model was designed to work with:
Information theory provides shortcuts which allow one to deal with complex systems. The basic idea one uses for this purpose is the maximum entropy principle developed by Jaynes. However, an extension of this maximum entropy principle to systems far from thermodynamic equilibrium or even to non-physical systems is problematic because it requires an adequate choice of constraints. In this paper we discuss a general concept of natural information equilibrium which does not require any choice of adequate constraints. It is, therefore, directly applicable to systems far from thermodynamic equilibrium and to non-physical systems/processes (e.g. biological processes and economical processes).
[Fielitz and Borchardt added the "economical" after learning of my blog and we periodically discuss how to properly interpret the model for economics. They have been a valuable resource in my research on this subject.]


We will set up the model as a set of distance slots (X) transferring information to a set of time slots (T) -- these become our process variables (see the preceding diagram). [Put another way, X is in information equilibrium with T.] The other key ingredient is the "detector", a differential relationship between the process variables that detects information transfer [or changes in equilibrium], which we identify as the velocity:

$$
V = \frac{dX}{dT}
$$

If we assume that the information transfer is ideal, so that the information in the distribution of the occupation over the distance slots is equal to the information in the distribution over the time slots, i.e. $I(T) = I(X)$, then we can use the theory [1] to state:

$$
\text{(1) } V = \frac{dX}{dT} = \frac{1}{\kappa} \frac{X}{T}
$$

where $\kappa$ is a constant (if the slots in the diagram above don't change) that can be worked out from theory, but can also be taken from empirical observations. It's called the information transfer index. Equation (1) represents an abstract diffusion process [1]; integrating it gives $X \sim T^{1/\kappa}$, so that

$$
X^{2} \sim T^{2/\kappa}
$$

And for $\kappa = 2$, you recover Fick's law of diffusion ($X^{2} \sim T$). However, other relationships are allowed (sub- and super-diffusion) for different values of $\kappa$. The index accounts for e.g. the number of lanes or the speed limits [or number of vehicles]. This is the equilibrium model of Arthur:
A typical model would acknowledge that at close separation from cars in front, cars lower their speed, and at wide separation they raise it. A given high density of traffic of N cars per mile would imply a certain average separation, and cars would slow or accelerate to a speed that corresponds. Trivially, an equilibrium speed emerges, and if we were restricting solutions to equilibrium that is all we would see.
Additionally, "supply and demand" curves follow from equation (1) for movements near equilibrium. The "demand curve" is the distance curve and the "supply curve" is the time curve. Some simple relationships follow: an increase in time means a fall in speed at constant distance (an increase in supply means a fall in price at constant demand), and an increase in distance results in an increase in speed at constant time. These are not necessarily useful for traffic, but are far more valuable for economics. The parameter $\kappa$ effectively sets the price elasticities.
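
As a trivial numerical sketch of those comparative statics (my own illustration with made-up numbers):

    # Equation (1) treated as a "price": V = (1/kappa) * X / T, with X as
    # "demand" (distance) and T as "supply" (time). Values are illustrative.
    kappa = 2.0

    def velocity(x, t):
        return x / (kappa * t)

    x, t = 100.0, 2.0
    print(velocity(x, t))        # baseline speed
    print(velocity(x, t * 1.1))  # more time at constant distance: V falls
    print(velocity(x * 1.1, t))  # more distance at constant time: V rises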

However, if we look at non-ideal information transfer we have $I(T) \leq I(X)$ and equation (1) becomes

$$
V = \frac{dX}{dT} \leq \frac{1}{\kappa} \frac{X}{T}
$$

In this case the velocity (our "price") falls below its ideal equilibrium value. Glasner continues his quote of Arthur:
But in practice at high density, a non-equilibrium phenomenon occurs. Some car may slow down — its driver may lose concentration or get distracted — and this might cause cars behind to slow down.
Our model produces something akin to Milton Friedman's plucking model where there is a level given by the theory and then the "price" (velocity) falls below that level during recessions (traffic jams):


The key to the slowdown is coordination [2]. When there is no traffic jam, drivers drive at speeds that have e.g. a normal distribution centered near the speed limit -- their speeds are uncoordinated with each other (just coordinated with the speed limit -- the drivers' speeds represent independent samples). For whatever reason (construction, an accident, too many cars on the road), drivers' velocities become coordinated -- they slow down together. This coordination can be associated with a loss of entropy [2] as drivers' velocities are no longer normally distributed near the speed limit but become coordinated in a slow crawl.
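
Here is a toy sketch of that entropy loss (the speed distributions are made up for illustration): the differential entropy of a normal distribution grows with its width, so the narrow "slow crawl" distribution has lower entropy than free flow.

    # Coordination as entropy loss (illustrative numbers).
    # Free flow: speeds ~ Normal(65, 5^2). Jam: speeds ~ Normal(10, 1^2).
    import numpy as np

    def normal_entropy(sigma):
        # differential entropy of a normal distribution, in nats
        return 0.5 * np.log(2 * np.pi * np.e * sigma**2)

    print(normal_entropy(5.0))  # free flow: higher entropy
    print(normal_entropy(1.0))  # coordinated crawl: lower entropy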

This isn't a complete model -- it is more like a first order analysis. It allows you to extract trends and can be used to e.g. guide the development of a theory for how coordination happens based on microfoundations like reaction times and following distances. In a sense, the information transfer model might be the macrofoundations necessary to study the microeconomic model.

For the information transfer model of economics, one just has to change $X$ to NGDP, $T$ to the monetary base, and $V$ to the price level. There's also an application to interest rates and employment. As a special aside, Okun's law drops out of this framework with just a few lines of algebra.
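
As a sketch of that algebra (my rendering, assuming an ideal information equilibrium labor market $P : NGDP \rightarrow L$ with information transfer index $\kappa_{\ell}$, as in equation (4) of the DSGE-form post from December 3 below):

$$
P = \frac{1}{\kappa_{\ell}} \; \frac{NGDP}{L} \;\; \Rightarrow \;\; \log L = \log \frac{NGDP}{P} - \log \kappa_{\ell} \;\; \Rightarrow \;\; \Delta \log L \approx \Delta \log RGDP
$$

so that employment growth tracks real output growth -- a simple form of Okun's law.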

Also, the speed limit can coordinate the distribution of velocities -- much like the central bank can coordinate expectations. I'd also like to note that no matter what the speed limit is, traffic may never reach it if there are too many cars on the road. This may be an analogy for e.g. Japan, the US and the EU undershooting their inflation targets.

And finally, there may be two complementary interpretations of this framework for economics. One as demand transferring information to the supply via the market mechanism and another as the future allocation of goods and services transferring information to the present allocation via the market mechanism.

[1] http://arxiv.org/abs/0905.0610
[2] http://informationtransfereconomics.blogspot.com/2014/10/coordination-costs-money-causes.html

Thursday, December 11, 2014

On travel for fun for once

There's going to be a long pause on the blog. I'm going to be on vacation through the beginning of January 2015 -- I'm headed to the UK for a little over three weeks. Mostly in London and then up to Scotland for a bit. I've never been to the UK; if anyone has any travel tips, feel free to leave them in comments. Restaurant recommendations in London (we'll be staying in Pimlico) or Edinburgh are very welcome -- especially kid-friendly places.

When I get back, I'd like to start off the new year with how the predictions I've made with the information transfer model are coming along. I'm also considering switching over from Mathematica to iPython in order to more readily distribute source code to those that are interested.

And don't forget: 2015 is supposed to be the year information theory starts to have an influence on macroeconomics.

Many thanks for reading, and have a happy new year!

Inflation is information theory not ethical philosophy

Scott Sumner argues both that there has been zero inflation since 1964 and that 100% of NGDP growth since 1964 was inflation. Tyler Cowen makes a good point about these arguments changing with income. However, Sumner's main conclusion is that inflation is a pointless concept -- in terms of understanding macroeconomic systems.

I think the real answer here is that economists don't really know what inflation is and they fall back on 19th century concepts like utility to ground them. Cowen and Sumner both ask if you'd rather live in 1964 or 2014 with a given nominal income. If that's what defines inflation -- hedonic adjustments and utility -- then I'd totally agree with Sumner: that's pointless.

But it is in this arena that the information transfer model may provide its most important insight (regardless of whether you picture money as transferring information from demand to supply or from the future to the present). The idea is spread over two posts, with the second being the main result:
  1. What is inflation?
  2. Expectations (rational or otherwise) and information loss
When money (M) is added to an economy, that means more information can be moved around. The difference between how much more information could theoretically be moved around with that money (proportional to $M^{1/\kappa}$) and how much more information is empirically observed to be moved around (proportional to measured NGDP) is inflation.
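
Concretely (this is just a restatement of the model's price level equation, cf. equation (2) of the December 3 post below), the price level goes as

$$
\log P \approx \left( \frac{1}{\kappa} - 1 \right) \log M + \text{const}
$$

so inflation is governed by the gap between the theoretical information capacity $M^{1/\kappa}$ and the money itself: as $\kappa \rightarrow 1$, adding money produces no inflation at all.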

Inflation has nothing to do with the specific goods and services being sold at a given time. It doesn't matter whether it's an iPhone or a fancy dinner. It doesn't matter whether it's toilet paper or bacon. Inflation is an information theoretical concept, not a philosophical utilitarian one.

Wednesday, December 10, 2014

How money transfers information from the future to the present

Widgets and money flow from their present allocation to their new future allocation while information in the future distribution flows to the present, transferred by the exchange. Widgets and coins (isotype) are from original work by Gerd Arntz.

Continuing along this path, we re-interpret the information transfer model as a picture of information flowing from the immediate future to the present through exchange. The diagram at the top of this post is meant to be a re-interpretation of the diagrams from this post.

Overall, this does not represent a mathematical difference, but rather a conceptual difference. Instead of organizing supply and demand "spatially" we're organizing it "temporally". This way of thinking about the process does help with the question of which way the information flows. There have been a few comments on this blog and elsewhere questioning how we know the information flows from the demand to the supply. In this way of picturing the model, information flows from the future to the present through market exchanges (the market is figuring out the future allocation of goods and money and uncovering that distribution means information is flowing from the future to the present).

This picture also gives us a new way to think about "aggregate demand": it derives from the allocation of goods and services in the future.

The information transfer model (information equilibrium) can also be stated more clearly using this picture. The ITM equates the information flowing from the future to the present through money to the information flowing from the future to the present through the goods. Of course you could always re-arrange this and say that the amount of information sent by the demand is equal to the amount of information received by the supply. However, I think the former description is much clearer.

The total amount of information that flows from the future to the present (in a given period of time) could be measured by e.g. the KL divergence between the future and present allocations. Since our knowledge of the future is imperfect, our expectations of the future allocation represent a loss of information, and this representation gives us a framework to start talking about that more explicitly.
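
As a toy sketch of that measurement (the allocations are made up; nothing here is data):

    # Information flow measured as the KL divergence between a 'future' and
    # 'present' allocation of goods (in nats). Allocations are illustrative.
    import numpy as np
    from scipy.stats import entropy

    present = np.array([0.40, 0.30, 0.20, 0.10])  # shares of 4 goods today
    future  = np.array([0.35, 0.25, 0.25, 0.15])  # shares after exchange

    # with two arguments, scipy.stats.entropy(p, q) is the KL divergence D(p||q)
    print(entropy(future, present))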

Tuesday, December 9, 2014

Meeting expectations halfway

 

Proceeding in the spirit that theories that are correct have several different formulations, and in conjunction with my fever-dream post from yesterday, I thought I'd start a project where I re-interpret the information transfer model in terms of expectations. Maybe it will lead nowhere. Maybe this is how I should have started. I've tried to make contact with macro theories that contain expectations before, but my general attitude is that "expectations" terms as components in a model are unconstrained and allow you to get any result you'd like.

But maybe that is a strength? If potential expectations are unconstrained, then they could be anything -- and we can simply assume complete ignorance about the expectations that produce a given macrostate (i.e. all expectations microstates consistent with the macrostate defined by NGDP, inflation, interest rates, etc are equally probable).

Let's go back to the original model and instead of calling the information source "the demand" and the destination "the supply" let's set it up as a system where information about an expected future is received by the present via the market mechanism. Our diagram is at the top of this post. From there, everything else on this blog follows through essentially with a re-labeling of demand as "expected demand".

I will do a couple of short posts as I think about the implications of this idea in terms of previous results. If this concept triggers any flashes of insight from anyone out there, let me know in comments. My initial feeling is that these expectations are unlike anything economists currently think of as expectations. A central bank still cannot target an inflation rate over the long run. In the normal formulation, if the central bank sets a target, expectations should anchor on that target; there is no reason for the expectations in the theory in the diagram to anchor on the target or on any other particular number. But maybe the undershooting in inflation is a sign that some information is being lost?

I don't know the answers and maybe this will lead nowhere, but I thought this is a more coherent description of what I was going for in my previous post.

Monday, December 8, 2014

What does $E_{t} \pi_{t+1}$ mean?



Figure from NY Fed [pdf] modified using excerpts from Lena Nyadbi's dayiwul lirlmim barramundi.

Sorry for the $\LaTeX$ in the title but I'm currently trying to decipher some DSGE models and there is a lack of a clear answer in various economics papers as to what e.g. the term

$$
E_{t} \pi_{t+1}
$$

means. The best answer comes from here [pdf]:
The expectations are typically modeled as being formed according to the rational expectations hypothesis. The notation $E_{t}$ in the model denotes model-consistent rational expectations, i.e., the mathematical conditional expectation based on time-t information set and derived from the model ... itself.
So if we assume rational expectations we basically come to the conclusion:

$$
E_{t} \pi_{t+1} = \pi_{t+1}
$$

Are economists embarrassed to have terms from the future in their equations, so they invent a new notation that says "oh, no, this term isn't really future inflation in our model that's trying to predict the future"? That's a bit uncharitable, though; what the term really means is (where the vertical bar means evaluated with/at)

$$
E_{t} \pi_{t+1} = \pi_{t+1} \vert_{\sigma^{i}_{t+1} = 0}
$$

where $\sigma^{i}_{t+1}$ represents whatever random processes or shocks generate the fluctuations (in the original RBC model this is TFP). I personally like that notation because it makes the rational expectations assumption $\sigma^{i}_{t+1} = 0$ explicit. The best notation would probably be (in fact, it has been used in the past)

$$
\pi^{E}_{t+1}
$$

where the $E$ is just a label for whatever model of expectations you are using -- as there are different models of expectations besides rational expectations. For example, as best I can tell, the model-independent expectations of e.g. market monetarism -- where the central bank, if credible, can target any price level path it chooses (see e.g. Nick Rowe here) -- look like this:

$$
E_{t} \pi_{t+1} = \Pi (t+1)
$$

where $\Pi (t)$ is a price level path function that sets the price level for each time period $t$. You could get fancy here and have the expectations follow a smooth path over more than a single time period to get back on track (mechanisms like sticky prices could slow adjustment from any given shock).
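
One simple way to write that down (my own illustration, not a formulation taken from the market monetarist literature) would be a partial-adjustment rule:

$$
E_{t} \pi_{t+1} = \Pi (t+1) + \lambda \left[ \pi_{t} - \Pi (t) \right]
$$

where $0 \leq \lambda < 1$ sets how slowly deviations from the path decay; $\lambda = 0$ recovers the perfectly credible case above.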

One issue with this model is that it sometimes seems to be used to go the wrong way. David Beckworth has said (paraphrasing) "Yes, the central bank may say that the price level has path $\Pi_{1} (t)$, but really it has path $\Pi_{2} (t)$", where empirical values $\pi_{t}$ are used to infer $\Pi_{2} (t)$. The specific issue with this is that a path $\Pi_{2} (t)$ that matches up with empirical data likely exists for pretty much any model. The inferred path $\Pi_{2} (t)$ becomes a universal fudge factor that, instead of explaining the empirical data, decouples your theory from it.

Neither of these views escapes the fact that a potentially rapidly changing (and possibly counterfactual) future [1] is directly coupled to an otherwise concrete theory. Now this isn't necessarily incorrect per se -- maybe things really are that uncertain and subjective. Who knows? But it gets weirder. Let's hand the floor over to Scott Sumner:
At first glance my hypothesis [that the (future) interest rate increase of 2015 caused the Great Recession] seems absurd for many reasons, primarily because effect is not suppose to precede cause. So let me change the wording slightly, and suggest that expectations of this 2015 rate increase caused the Great Recession. Still seems like a long shot, but it's actually far more plausible than you imagine (and indeed is consistent with mainstream macro theory.)
Yes, this is totally consistent with mainstream macro -- the two equations above show how information (and changing information) about the future can potentially propagate backwards into the past. All you need to do is couple a $t+1$ term to a $t$ term. That may be why economists sheepishly write those $E_{t}$ terms.

But I have an additional critique beyond the basic structure of expectations-based macroeconomic theories that might seem a bit strange and it's related to the causality problems with Sumner's hypothesis. What keeps expectations in the future? What prevents expectations of $\pi_{t+1}$ from becoming the value of $\pi_{t}$? What keeps the dreamtime from becoming all time? [2]

In a sense, expectations can travel from a point in the future to the present instantaneously -- a kind of temporal 'action at a distance' [3]. There are two things that appear to stop this in macro models:
  1. Sticky prices/wages: the empirically observed fact that some prices are slow to change
  2. Uncertainty: because the future is uncertain, the full impact of expectations is reduced
Interestingly, both of these ideas have interpretations in terms of entropic forces. Prices are sticky because the price distribution cannot spontaneously organize itself (destroy uncertainty about the microstates) in a way that allows prices to change, and the very definition of an entropic force is a force that serves to preserve or restore uncertainty about the microstate (increase entropy).

Can we therefore interpret the information transfer model as a model of economic dreamtime? In the information transfer model, we assume all possible microstates consistent with current macro observations are not only possible, but equally probable. That includes all possible future paths of the price level. Is modeling a macroeconomy with expectations no different than assuming you know nothing about how macroeconomics works?

With expectations, we decouple the bulk of the DSGE model (the concrete theory) from the data. Maybe that is a good thing: instead of the various boxes in the diagram at the top of this post, plus the blob of economic dreamtime expectations that includes all possible futures, plus the empirical data, all we need is the blob (which we model by assuming ignorance of what it is) and the empirical data. The information transfer model is the model of expectations that instantly propagates expectations of the future to the present and completely dominates the concrete theory.

The spice is the worm. The worm is the spice.

[1] I like to imagine it visually as aboriginal dreamtime paintings.
[2] I think the dreamtime metaphor is especially useful here because dreamtime represents the past and future holistically.
[3] It's funny that action at a distance is associated with a violation of special relativity (the constant speed of light) which is in turn associated with causality violation.

Saturday, December 6, 2014

Information equilibrium theories satisfy the Lucas critique

Image from wikimedia commons and modified by me with a random Voronoi diagram to sort of suggest Planck areas measuring the surface units of information.

Despite being annoyed by me, Noah Smith is probably one of the best blogging resources for understanding modern macroeconomics around today, and I frequently go back through his archives and click through the many self-referential links on his blog (a method I have apparently copied). This may be an educational style bias as Noah was an undergraduate physics major, giving us a common point of reference. And there is, of course, the same excellent last name.


Anyway, I was just reading this post of his on the Lucas critique -- the idea that observed macroeconomic relationships may change if you try to use them for policy (the prime example of which is the Phillips Curve which seemed to go away when it was used for policy). Noah quotes Charles Plosser:
Plosser says that "almost no model [has] satisfactorily dealt with the [Lucas Critique]."

Noah then proceeds to ask how one would satisfy the Lucas critique and identifies three methods:

  • Macro experiments (in real or virtual worlds)
  • Real microfoundations and detailed simulations (agent-based models)
  • Vague judgment calls (which is Noah's observation of how things seem to work)

This leaves out a fourth possibility:

  • Assume as little as possible about the microfoundations and see how far that can take you

This fourth possibility is the approach taken in the 19th century to understand thermodynamics (we knew little about atoms at the time), but it is also behind maximum entropy methods and -- particularly relevant to this blog -- information equilibrium (such as the information transfer model). Note that I've talked about this before in relation to the Lucas critique.

I'm not sure I've made this clear enough on this blog, but in a sense, whatever theory of macroeconomics turns out to be correct, the information transfer model has to be correct. Note that I've talked about this before as well.

The information transfer model may not be useful, but it has to be correct. The information received on one side of a supply and demand transaction has to be less than or equal to the information sent from the other side: $I_{1} \leq I_{2}$. The cases where it is not useful would be where $I_{1} \ll I_{2}$ (most of the information disappears into the information theory equivalent of 'heat') or where $I_{1} \sim I_{2}$ doesn't contain enough ... well, enough information to get the details right (microfoundations could matter a lot, or the fluctuations around $I_{1} \sim I_{2}$ may be the most important part).

In a sense, information equilibrium theories like the one presented on this blog are analogous to assuming isentropic processes in thermodynamics. These will fail if the process produces a lot of entropy or if there is more going on that involves additional physics, like, say, superconductivity. Sometimes assuming energy conservation is all you need to know, but sometimes it's not enough by itself, or energy isn't conserved (in a useful way for the problem).

Another useful analogy from physics is black hole thermodynamics.

Stephen Hawking and Jacob Bekenstein, in trying to understand black holes in terms of quantum mechanics, posited that black holes must obey thermodynamics and were able to derive some important relationships that have shed light on how general relativity (Einstein's theory of gravity over long distances) and quantum mechanics (the fundamental theory of small distances) fit together. One really interesting result is that black holes seem to have extremely large entropy meaning they have a huge number of degrees of freedom (entropy is proportional to the surface area of the black hole in Planck units) -- and that you can use string theory to derive that entropy.

Now thermodynamics didn't have to be useful to describe black holes, but it did have to be correct. This is the same for information equilibrium and macroeconomics. In fact, one can construct an elaborate analogy:

Quantum mechanics :: microfoundations
General relativity :: macroeconomics
General relativity may be wrong about black holes :: Lucas critique
General relativity ignores quantum mechanics :: Lucas critique
String theory (quantum theory of gravity) :: microfounded theory of macro
Black hole thermodynamics :: information equilibrium theories

Hawking and Bekenstein's work on black hole thermodynamics would be correct regardless of the final form of the quantum theory of gravity. In the same way, information transfer economics would be correct regardless of the final form of macroeconomics and so satisfies the Lucas critique.

Thursday, December 4, 2014

The information transfer Solow growth model is remarkably accurate


I wanted to jump in (uninvited) on this conversation (e.g. here, here) about total factor productivity (TFP, aka phlogiston) using some previous work I've done with the information transfer model and the Solow growth model.

In using the Solow growth model, economists assume the Cobb-Douglas form

$$
Y = A K^{\alpha} L^{\beta}
$$

as well as assume $\alpha + \beta = 1$ to enforce constant returns to scale and then assign the remaining variation to $A$, the Solow residual aka TFP.

The conversation linked at the top of this post is then about why TFP seems to have slowed down (e.g. the Great Stagnation or whatever your model is). This is all a bit funny to me because it is effectively asking why the fudge factor is going away (if TFP is constant then $A$ is just a constant in the formula above).

Well, my intended contribution was to say "Hey, what if $\alpha$ and $\beta$ are changing?". In the information transfer model, the Solow growth model follows from a little bit of algebra and gives us

$$
Y = A K^{1/\kappa_{1}} L^{1/\kappa_{2}}
$$

where $\kappa_{i}$ are the information transfer indices in the markets $p_{1} : Y \rightarrow K$ and $p_{2} : Y \rightarrow L$.

The first step in looking at changing $\kappa$ is to look at constant $\kappa$ (and constant $A$). That threw me off my original intent because, well ... because the information transfer Solow growth model with constant TFP and constant $\kappa_{i}$ is a perfect fit:


It's so good, I had to make sure I wasn't implicitly using GDP data to fit GDP data. Even the derivative (NGDP growth) is basically a perfect model (for economics):


Of course, in the information transfer model $\kappa_{1}$ and $\kappa_{2}$ have no a priori relationship to each other and in fact we have

$$
\frac{1}{\kappa_{1}} + \frac{1}{\kappa_{2}} = 1.25
$$

or individually $\kappa_{1} = 1.18$ and $\kappa_{2} = 2.50$. So there aren't "constant returns to scale".
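
For anyone who wants to reproduce the fit, here is a minimal sketch (synthetic data stands in for the nominal GDP, capital and employment series; the actual fit used FRED data in Mathematica):

    # Fit Y = A * K^(1/kappa_1) * L^(1/kappa_2) by least squares on logs.
    # Synthetic data below with the fitted exponents built in.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    logK = 10.0 + 2.0 * rng.random(n)
    logL = 11.0 + 0.5 * rng.random(n)
    logY = 0.5 + (1/1.18) * logK + (1/2.50) * logL + 0.01 * rng.normal(size=n)

    # log Y = log A + (1/kappa_1) log K + (1/kappa_2) log L
    X = np.column_stack([np.ones(n), logK, logL])
    (logA, inv_k1, inv_k2), *_ = np.linalg.lstsq(X, logY, rcond=None)
    print(1/inv_k1, 1/inv_k2, inv_k1 + inv_k2)  # ~1.18, ~2.50, ~1.25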

In the information transfer model, this is not a big worry. The two numbers represent the information entropy in the widgets represented by each input (dollars of capital and number of employees, respectively) relative to the widgets represented in the output (dollars of NGDP) -- why should those things add up to one in any combination? That is to say, the values of $\kappa$ above simply say there are fewer indistinguishable types of jobs than there are indistinguishable types of capital investments, so adding a dollar of capital adds a lot more entropy (unknown ways in which it could be allocated) than adding an employee. A dollar of capital is interchangeable for a lot of different things (computers, airplanes, paper) whereas a teacher or an engineer tends to go into teaching and engineering slots**. Adding the former adds more entropy, and entropy means economic growth.

PS Added 12/5/2014 12pm PST:  The results above use nominal capital and nominal GDP rather than the usual real capital and real output (RGDP). The results with 'real' (not nominal) values don't work as well. I am becoming increasingly convinced that "real" quantities may not be very meaningful.

** This is highly speculative, but it lends itself to a strange interpretation of salaries. Economists may make more money than sociologists because they are interchangeable among a larger class of jobs; CEOs may make the most money because they are interchangeable among e.g. every level of management and probably most of the entry level positions. A less specific job description (higher information entropy in filling that job) corresponds with a bigger contribution to NGDP and hence a higher salary.

Wednesday, December 3, 2014

An information transfer DSGE model


For fun (and bowing to the inherent superiority of economics over other fields), here's the information transfer (macro) model as a fancy log-linearized DSGE/RBC model with lots of neat $\LaTeX$ symbols:

$$
\text{(1) } n_{t} =  \sigma_{t} + \left( \frac{1}{\kappa} - 1 \right) (m_{t} - m_{t-1}) + n_{t-1}
$$
$$
\text{(2) } \pi_{t} = \left( \frac{1}{\kappa} - 1 \right) (m_{t} + m^{*}) + c_{m}
$$
$$
\text{(3a) } r^{l}_{t} = c_{1} (n_{t} - m_{t} - m^{*}) + c_{2}
$$
$$
\text{(3b) } r^{s}_{t} = c_{1} (n_{t} - b_{t} - b^{*}) + c_{2}
$$
$$
\text{(4) } \ell_{t} = n_{t} - \pi_{t} + c_{\ell}
$$
$$
\text{(5) } w_{t} = n_{t} + c_{w}
$$

Here the log-linearized variables are $n$ (nominal output), $m$ (the currency base, M0), $b$ (monetary base, MB), $\pi$ (the price level), $\ell$ (labor supply), $w$ (nominal wages) and $r^{x}$ (interest rates with x = l, s for long- and short-term). The term $\sigma$ represents the 'nominal shocks' (see also here, mathematically no different from the RBC TFP shocks, but could include e.g. changes in government spending). The parameters are the $c_{i}$, $m^{*}$ and $b^{*}$ (the log of the equilibrium/starting values of the monetary base components) along with our old friend $\kappa$, the information transfer index. In a log-linearized model, $\kappa$ is a constant since we're considering small deviations from the variables in the log-linearization.
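
Here is a minimal sketch of iterating the system forward (my own illustration; the parameter values and the monetary base path are made up):

    # Iterate the log-linearized model (1)-(5) forward in time.
    # All parameter values and paths below are illustrative only.
    import numpy as np

    kappa, m_star = 0.8, 0.0
    c_m, c_1, c_2, c_ell, c_w = 0.0, 1.0, 0.0, 0.0, 0.0

    T = 20
    m = 0.02 * np.arange(T)       # currency base grows 2% per period
    sigma = np.full(T, 0.01)      # constant positive nominal shocks

    n = np.zeros(T)
    for t in range(1, T):
        n[t] = sigma[t] + (1/kappa - 1) * (m[t] - m[t-1]) + n[t-1]  # (1)

    pi = (1/kappa - 1) * (m + m_star) + c_m   # (2) price level
    r_l = c_1 * (n - m - m_star) + c_2        # (3a) long-term interest rate
    ell = n - pi + c_ell                      # (4) labor supply
    w = n + c_w                               # (5) nominal wages
    print(n[-1], pi[-1], r_l[-1])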

This representation explicitly shows that there aren't any $E_{t}$ (expectations) terms, unlike e.g. the RBC model [pdf]. A couple of additional interesting facts pop out if we set $\kappa$ to the 'IS-LM' regime ($\kappa = 1.0$) or the quantity theory of money regime ($\kappa = 0.5$) (see here for more about the meaning of this). If $\kappa = 1.0$ we have:

$$
\text{(1a) } n_{t} =  \sigma_{t} + n_{t-1}
$$
$$
\text{(2a) } \pi_{t} = c_{m}
$$

where nominal output is unaffected by monetary policy and inflation is constant. If we plug (1a) back into the interest rate formula, we see that monetary expansion will cause interest rates to fall (nominal output is unchanged by monetary expansion). If $\kappa = 0.5$ we have:

$$
\text{(1b) } n_{t} = \sigma_{t} + m_{t} - m_{t-1} + n_{t-1}
$$
$$
\text{(2b) } \pi_{t} = m_{t} + m^{*} + c_{m}
$$

which says the price level grows with the monetary base (the quantity theory of money). If we plug (1b) back into the interest rate formula, we see that monetary expansion will generally cause interest rates to rise (nominal output will increase because nominal growth will exceed monetary expansion). Explicitly:

$$
r_{t} = c_{1} (\sigma_{t} + n_{t-1} - m_{t-1} - m^{*}) + c_{2} = c_{1} \sigma_{t} + r_{t-1}
$$

where the nominal shocks $\sigma$ are observed to be generally positive (think growth in TFP or population). (The constant $c_{2}$ is $\sim \log NGDP/M0$.)

Anyway, this is just the baseline model -- I'll put in some equilibrium conditions and policy targets and see if economics really is superior to other fields of inquiry. I mean this kind of thing was Nobel prize level work in the 1980s.

PS Where's the information theory in this, you ask? It's in the parameter $\kappa$, for starters, which defines the information content of nominal output relative to the supply of money. Equations (1), (4) and (5) represent information equilibrium conditions while equations (2) and (3) represent measures of the rates of change of information transfer.

Monday, December 1, 2014

Japan's new recession

Scott Sumner has up a series of posts (here, here, here, here and here) that question the meaningfulness of the word 'recession' if, as the media (and OECD) says, Japan has entered a recession. In one of those posts, Sumner has two definitions of recession that show economic analysts use a variety of indicators to determine whether a recession has happened. Sumner advocates a view that stresses unemployment.

Changes in unemployment do seem to be a strong recession indicator in the US, but some of these indicators are a bit weird if you think about them. For example, the 1990s recession ends at an unemployment level that the 2000s recession reaches only at its peak. That makes me think that the absolute level of unemployment is not a reliable indicator.

In the information transfer model, unemployment appears to be the result of a spontaneous coordination that breaks the market mechanism. The rapid climbs in unemployment appear to be mass panic and require some sort of human behavior model that can't be explained by macroeconomic variables. The information transfer model, however, does appear to have a pretty solid definition of recession that even has an analogy in terms of avalanches. I look at what I called "nominal shocks" -- they are the difference between where NGDP is and where it 'should be' based on the change in the price level alone. The method is described here and I discuss the usefulness of the indicator in the case of low inflation here.
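
In code, the nominal shock extraction is just the residual of equation (1) from the December 3 post above (a minimal sketch; the series below are synthetic placeholders for the Japanese data and the $\kappa$ value is illustrative):

    # Nominal shocks: observed NGDP growth minus the growth implied by the
    # monetary base alone, per equation (1) of the log-linearized model.
    import numpy as np

    kappa = 0.9
    ngdp = np.array([480.0, 484.0, 486.0, 483.0, 489.0])  # trillions of yen
    base = np.array([ 60.0,  63.0,  67.0,  72.0,  78.0])

    n, m = np.log(ngdp), np.log(base)
    sigma = np.diff(n) - (1/kappa - 1) * np.diff(m)
    print(sigma * 100)  # nominal shocks in percent per period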

Japan has been experiencing outright deflation over the past couple of decades ... which could make you misinterpret falling NGDP (or very slow growth) as a recession. Using the method above, we can figure out that Japan has been experiencing an average 'nominal shock' of +0.96% since 1994. The economy has been 'growing' despite deflation [corrected]. Here is a graph of these nominal shocks with OECD recessions indicated in red:


If you split up the second to last recession, you get evidence for the LA Times claim of four recessions since 2008 that Sumner ridicules in one of the linked posts above.

The recessions are all fairly obvious sustained negative deviations (or pairs of deviations) except the one that happens in 2004. In any case, there is a pretty solid case for 2014 to be a recession. If the ITM ruled economics, I'd throw out the 2004 recession and possibly split up the recession in 2011-2012:

The other thing to note is that the two VAT increases (vertical lines in the previous two graphs) precede large negative shocks and associated recessions.

...

PS I did the fit to the Japanese price level after subtracting out the two big VAT increases that show up in the data and it looks pretty good: