Sunday, April 30, 2017

Can we see a Phillips curve?

The new core PCE inflation number for March comes out May 1st. In preparation for that, I was looking at the dynamic equilibrium model for PCE inflation, adding more shocks to see how well the model could fit the data. In the process, I noticed something odd/interesting:


These are all positive shocks to PCE inflation, but notice anything about the dates? Let me add NBER recessions on this picture:


Each recession is associated with a positive shock to PCE inflation that precedes it. The only exceptions are the early 2000s recession (for which there is a debate about whether it was a recession at all) and the early 1960s recession (for which there isn't data). Actually, it is not entirely out of the question to add a shock for the early 2000s [1]. Since these shocks precede the recessions, they'll also precede the shocks to unemployment (adding the dynamic equilibrium model of unemployment from e.g. here):


This reproduces a "Phillips curve"-like behavior. Inflation rises when unemployment has been falling for a while after an unemployment shock. Just after a positive inflation shock, we get a shock to unemployment. Therefore inflation will tend to fall (since the shock is over) while unemployment is rising. These fluctuations are likely happening on top of the demographic transition of the 1960s and 70s.
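For reference, the functional form behind these fits is a constant log-slope (the "dynamic equilibrium") plus logistic shock terms. Here's a minimal sketch in Python ‒ the parameter names and the use of curve_fit are mine, not necessarily the actual fitting procedure used for the graphs:

```python
import numpy as np
from scipy.optimize import curve_fit

def dynamic_equilibrium(t, alpha, c, a1, t1, b1):
    """Log of the series: linear trend (slope alpha) plus one logistic shock
    with amplitude a1, center t1, and width b1 (add terms for more shocks)."""
    return alpha * t + c + a1 / (1.0 + np.exp(-(t - t1) / b1))

# usage sketch: t = years, y = log of the PCE price level
# params, cov = curve_fit(dynamic_equilibrium, t, y,
#                         p0=[0.02, 0.0, 0.1, 1978.0, 2.0])
```

Each shock contributes a step in the log of the series; its time derivative is the (approximately Gaussian) bump that appears as a "shock" in the figures.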

If we are headed into another recession (per here), this might explain the higher inflation of the past year or so (core PCE inflation was over 3% in Jan of 2016 and 2017, having not been above 3% since 2012):


This is interesting, as it means rising inflation is a sign of an upcoming recession (the center of the inflation shock precedes the center of the unemployment shock by about 1.3 years on average). However, this could be a just-so story: inflation rises because unemployment gets low. But as recessions occur roughly at random with a mean time between them of about 8 years, it would just appear that we get recessions after unemployment has been falling for a while (and we get a rise in inflation).

Update 1 May 2017

I had forgotten about the low CPI number earlier in April which should have prepared us for the very low March 2017 number: -1.7% (continuously compounded annual rate of change).

Footnotes:

[1] I don't necessarily think it's useful, but here it is:

Saturday, April 29, 2017

High dimensional budget constraints and economic growth


This is something of a partial idea, a work in progress. Let's say there is some factor of production $M$ allocated across $p$ different firms. The $p$-volume bounded by this budget constraint (the volume of the simplex $m_{1} + \cdots + m_{p} \leq M$ with $m_{i} \geq 0$) is:

$$
V = \frac{M^{p}}{p!}
$$

p-volume bounded by budget constraint M

Let's say total output $N$ is proportional to the volume $V$. Take the logarithm of the volume expression 

$$
\log V = p \log M - \log p!
$$

and use Stirling's approximation for a large number of firms:

$$
\log V = p \log M - p \log p + p
$$

If we assume $V \sim e^{\nu t}$ and $M \sim e^{\mu t}$, take the (logarithmic) derivative (the continuously compounded rate of change), and re-arrange a bit:

$$
\nu = \left(  p+ \left( t - \frac{\log p}{\mu} \right) \frac{dp}{dt}   \right) \mu
$$

Now let's take $p \sim e^{\pi t}$ and re-arrange a bit more:

$$
\text{(1) }\; \nu = p \left( 1 + \left(1 - \frac{\pi}{\mu} \right) \pi t   \right) \mu
$$

In the information equilibrium model, for exponentially growing functions with growth rates $a$ and $b$ we have the relationship (see e.g. here)

$$
a = k b
$$

where $k$ is the information transfer index. So in equation (1) we can identify the IT index

$$
k \equiv p \left( 1 + \left(1 - \frac{\pi}{\mu} \right) \pi t   \right)
$$

In a sense, we have shown one way the information equilibrium condition with $k \neq 1$ can manifest itself. For short time scales $\pi t \ll 1$, we can say $p \approx p_{0} (1 + \pi t)$ and:

$$
k \approx p_{0} \left( 1 + \pi t + \left(1 - \frac{\pi}{\mu} \right) \pi t   \right)
$$

This is an interesting expression. If $\pi > 2 \mu$ then the IT index falls. That is to say, if the number of firms grows more than twice as fast as the factor of production, then the IT index falls. Is this the beginning of an explanation for falling growth with regard to secular stagnation? I'm not so sure.
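A quick numerical check of the small-$t$ approximation (all parameter values here are illustrative, not fit to anything):

```python
import numpy as np

# growth rates of M and p, and initial number of firms (made-up values)
mu, pi_, p0 = 0.03, 0.08, 100.0

t = np.linspace(0.0, 2.0, 5)
p = p0 * np.exp(pi_ * t)

k_exact = p * (1.0 + (1.0 - pi_ / mu) * pi_ * t)
k_approx = p0 * (1.0 + pi_ * t + (1.0 - pi_ / mu) * pi_ * t)

# the two agree while pi_ * t << 1, and both fall since pi_ > 2 * mu
print(np.c_[t, k_exact, k_approx])
```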

As I said, this is a work in progress.

Friday, April 28, 2017

Update to the predicted path of NGDP

The new GDP numbers are out today, and RGDP came in a bit low per the dynamic equilibrium. However the NGDP number is basically on track [1] with the prediction started over two years ago (most recently updated here):



I added orange dots and an orange dotted line to show the data available at the time. It looks like we can pretty well reject the "old normal" exponential growth model (gray dashed in both graphs). In the second graph, the model NGDP growth rate (blue line) appears biased high by 0.2 percentage points compared to the linear fit to the data (dotted yellow line).

There are still potential revisions (see the difference between the orange dotted and yellow curves); we'll get the second estimate on 26 May 2017.

...

Footnotes:

[1] Meaning the deflator was high, which it was at 2.2%.

Thursday, April 27, 2017

What will the GDP number be tomorrow?

Menzie Chinn shows us the various estimates from GDPnow (Atlanta Fed), e-forecasting, and Macroeconomic Advisers.

I thought I'd put a prediction out there using this model (which estimates RGDP per capita [prime age], so this includes an extrapolation from the model plus an estimate of prime age population growth, with the errors propagated from each ‒ though nearly all of the error, by an order of magnitude, is in the RGDP model). The result is (SAAR, chained 2009 dollars):

16,933.2 ± 63.9 billion dollars (1σ)

or

0.71 ± 0.38 % growth [1] ... i.e. 0.3% to 1.1%
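The error bar combines the two sources in quadrature (relative errors, since it's a product); a minimal sketch with stand-in numbers, not the model's actual internals:

```python
import numpy as np

# stand-in values for illustration only (not the actual model outputs)
rgdp_pc, sigma_pc = 135.0e3, 0.50e3   # RGDP per prime-age capita [$] and its error
pop, sigma_pop = 125.0e6, 0.05e6      # prime-age population [persons] and its error

rgdp = rgdp_pc * pop / 1e9            # billions of dollars
# relative errors add in quadrature for a product; the RGDP term dominates
rel_err = np.hypot(sigma_pc / rgdp_pc, sigma_pop / pop)
print(f"{rgdp:.1f} ± {rgdp * rel_err:.1f} billion")
```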

Chinn tells us the Bloomberg consensus is 1.1%. Macroeconomic Advisers says 0.3%. GDPnow says 0.2%. The dynamic equilibrium model of RGDP per capita basically covers that entire spread. However, the dynamic equilibrium model has only two parameters (since we're not in a shock). That means all the parameters of the GDPnow model or MA's model are getting you just a few tenths of a percentage point.

GDPnow seems to take into account the "low first quarter effect"; I wonder if MA does the same?

...

Update 28 April 2017:

The number is here and it is a bit lower than the model shows:

16,842.4
(+ 0.2 %)

which means quarter-over-quarter growth (what I show above) was 0.2% (annualized, 0.7%, which you might have seen in news reports, e.g. here).

However, this is the advance estimate and there is a tendency for these to be revised (though it could be the "low first quarter effect" mentioned above). So we'll see on 26 May 2017 what happens.

...

Footnotes:

[1] Quarter on quarter SAAR. Based on the not-yet-revised 16,813.3 billion number for Q4 2016.

Wednesday, April 26, 2017

Falsehoods scientists believe about economics

I stumbled upon a fun list of "falsehoods programmers believe about economics", and tweeted that I thought it also seems pretty representative of falsehoods scientists believe about economics. However, after I thought about it for a bit I realized that there are really two classes of scientists: scientists who believe those falsehoods and those that don't. 

Just kidding.

But really, scientists believe just one of two falsehoods about economics:
  1. Economics is a scientific field like any other that speaks in the public sphere using theories that are empirically grounded and responds to changes in the data, or ...
  2. Economics is a field rife with methodological issues and bad models that can be greatly improved not only by the approaches used in the scientist in question's own discipline, but in fact by the scientist in question's own work.
I used to be in category 1, but have since moved into category 2 [1].

...

Footnotes:

[1] That's a bit of a self-deprecating joke. Actually, the version (v2) of Fielitz and Borchardt's paper that I saw (which wasn't really relevant to my work at the time) came up during a search for something entirely different (which was relevant). I recalled the F&B paper (particularly the reference to "non-physical systems") while sitting through a presentation on prediction markets, and a bit later Paul Krugman wrote this blog post ("It is not easy to derive supply and demand curves for an individual good from general equilibrium with rational consumers blah blah.") which I took as a challenge and started this blog.

The F&B paper was subsequently updated (v4) to include a reference to this blog.

Tuesday, April 25, 2017

Should the left engage with neoclassical economics?

[Update: A (in my opinion, better) version of this blog post has been reprinted as an article at Evonomics.]



Vox talked with Chris Hayes of MSNBC in one of their podcasts. One of the topics that was discussed was neoclassical economics:
[Vox:] The center-right ideas the left ought to engage[?]
[Hayes:] The entirety of the corpus of Hayek, Friedman, and neoclassical economics. I think it’s an incredibly powerful intellectual tradition and a really important one to understand, these basic frameworks of neoclassical economics, the sort of ideas about market clearing prices, about the functioning of supply and demand, about thinking in marginal terms. 
I think the tradition of economic thinking has been really influential. I think it's actually a thing that people on the left really should do — take the time to understand all of that. There is a tremendous amount of incredible insight into some of the things we're talking about, like non-zero-sum settings, and the way in which human exchange can be generative in this sort of amazing way. Understanding how capitalism works has been really, really important for me, and has been something that I feel like I'm a better thinker and an analyst because of the time and reading I put into a lot of conservative authors on that topic.
I can hear some of you asking: Do I have to?

The answer is: No.

Why? Because you can get the same understanding while also understanding where these ideas fall apart ‒ that is to say, understanding the limited scope of neoclassical economics ‒ using information theory.

Prices and Hayek

One thing that I think needs to be more widely understood is that Hayek did have some insight into prices having something to do with information, but got the details wrong. He saw market prices aggregating information: a crop failure, a population boom, speculation on turning rice into ethanol ‒ these events would cause food prices to increase, and that price change represented knowledge about the state of the world being communicated. However, Hayek was writing in a time before communication theory (Hayek's The Use of Knowledge in Society was written in 1945, a few years before Shannon's A Mathematical Theory of Communication in 1948). The issue is evident in that list of events. The large amount of knowledge about biological or ecological systems, populations, and social systems is all condensed into a single number that goes up. Can you imagine the number of variables you'd need to describe crop failures, population booms, and market bubbles? Thousands? Millions? How many variables of information do you get out via the price of rice in the market? One.

What we have is a complex multidimensional space of possibilities that is being compressed into a single dimensional space of possibilities (i.e. prices). Therefore, if the price represents information aggregation, we are losing a great deal of that information in the process. As I talk about in more detail here, one way neoclassical economics deals with this is to turn that multidimensional space into a single variable (utility), but that just means we've compressed all that information into something else (e.g. non-transitive or unstable preferences).

However, we can re-think the price mechanism's relationship with information. Stable prices mean a balance of crop failures and crop booms (supply), population declines and population booms (demand), speculation and risk-aversion (demand). The distribution of demand for rice is equal to the distribution of the supply of rice (see the pictures above: the transparent one is the "demand", the blue one is the "supply"). If prices change, the two distributions must have been unequal. If they come back to the original stable price ‒ or another stable price ‒ the two distributions must have become equal again. That is to say, prices represent information about the differences (or changes) in the distributions. Coming back to a stable price means information about the differences in one distribution must have flowed (through a communication channel) to the other distribution. We can call one distribution D and the other S, for demand and supply. The price is then a function of changes in D and changes in S, or

p = f(ΔD, ΔS)

Note that we observe that an increase in S that's bigger than an increase in D generally leads to a falling price, while an increase in D that is bigger than the increase in S generally leads to a rising price. That means we can try

p = ΔD/ΔS

for our initial guess. Instead of a price aggregating information, we have a price detecting the flow of information. Constant prices tell us nothing. Price changes tell us information has flowed (or been lost) between one distribution and the other.
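A toy simulation of this "detector" picture (entirely made-up series, just to illustrate the definition p = ΔD/ΔS):

```python
import numpy as np

rng = np.random.default_rng(0)
S = np.cumsum(rng.lognormal(size=100))     # growing "supply" measure
D_matched = 2.0 * S                        # demand tracking supply exactly
D_shocked = D_matched + np.where(np.arange(100) > 50, 5.0, 0.0)  # one-time demand shift

p_matched = np.diff(D_matched) / np.diff(S)   # constant price: no information flowing
p_shocked = np.diff(D_shocked) / np.diff(S)   # a price spike marks the flow event
print(p_matched[:3], p_shocked[48:53])
```

When demand tracks supply the price sits at a constant value; the one-time shift in D shows up as a single price spike, after which the price settles back down.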

This picture also gets rid of the dimensionality problem: the distribution of demand can be as complex and multidimensional (i.e. depend on as many variables) as the distribution of supply.

Marginalism and supply and demand

Marginalism is far older than Friedman or Hayek, going back at least to Jevons and Marshall. In his 1892 thesis, Irving Fisher tried to argue that if you have gallons of one good A and bushels of another good B that were exchanged for each other, then the last increments (the margin) were exchanged at the same rate as the total quantities A and B, i.e.

ΔA/ΔB = A/B

calling both sides of the equation the price of B in terms of A. Note that the left side is our price equation above, just in terms of A and B (you could call A the demand for B). In fact, we can get a bit more out of this equation if we say

pₐ = A/B

If you hold A = A₀ constant and change B, the price goes down. For fixed demand, increasing supply causes prices to fall – a demand curve. Likewise if you hold B = B₀ constant and change A, the price goes up – a supply curve. However, if we take tiny increments of A and B and use a bit of calculus (ΔA/ΔB → dA/dB), the equation only allows A to be proportional to B (integrating dA/dB = A/B gives A = c B). It's quite limited, and Fisher attempts to break out of this by introducing marginal utility. However, thinking in terms of information can again help us.

Matching distributions

If we think of our distribution of A and distribution of B (like the distributions of supply and demand above), each "draw" event from those distributions (like a draw of a card, a flip of a coin, or a roll of a die) contains I₁ information for A (e.g. a flip of a coin contains 1 bit of information) and I₂ for B. If the distributions of A and B are in balance ("equilibrium"), each draw event from each distribution (a transaction event) will match in terms of information. Now it might cost two or three gallons of A for each bushel of B, so the numbers of draws on either side will be different in general, but as long as the number of draws is large, the total information from those draws will be the same:

n₁ I₁ = n₂ I₂

We'll call I₁/I₂ = k for convenience so that

k n₁ = n₂
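As a toy numerical example (my numbers, purely illustrative): if each A-event is a coin flip and each B-event is a die roll, matching total information fixes the ratio of the numbers of draws:

```python
import math

I1 = 1.0            # bits per A-event (a coin flip)
I2 = math.log2(6)   # bits per B-event (a die roll), ≈ 2.585
k = I1 / I2         # information transfer index, ≈ 0.387

n1 = 1000           # draws from the A distribution
n2 = k * n1         # draws from B carrying the same total information
assert abs(n1 * I1 - n2 * I2) < 1e-9
```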

Now say the smallest amount of A is ΔA and likewise for B. That means

n₁ = A/ΔA
n₂ = B/ΔB

i.e. the number of gallons of A is the amount of A (i.e. A) divided by 1 gallon of A (i.e. ΔA). Putting this together and re-arranging a bit we have

ΔA/ΔB = k A/B

This is just Fisher's equation again except there's a coefficient in it, making the result a bit more interesting when you take tiny increments (ΔA/ΔB → dA/dB) and use calculus. But there's a more useful bit of understanding you get from this approach that you don't get from neoclassical economics. What we have is information flowing between A and B, and we've assumed that information transfer is perfect. But markets aren't perfect, and all we can really say is that the most information that gets from the distribution of A to the distribution of B is all of the information in the distribution of A. Basically

n₁ I₁ ≥ n₂ I₂

Following this through the derivation above, we find

p = ΔA/ΔB ≤ k A/B

The real prices in a real economy will fall at or below the ideal neoclassical prices. There's also another assumption in that derivation – that the number of transaction events is large. So even if the information transfer were ideal, neoclassical economics only applies in markets that are frequently traded.
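As a quick check of what that coefficient does, you can solve the condition symbolically (a sketch using sympy; not part of the original derivation):

```python
import sympy as sp

B, k = sp.symbols('B k', positive=True)
A = sp.Function('A')

# information equilibrium condition: dA/dB = k * A / B
sol = sp.dsolve(sp.Eq(A(B).diff(B), k * A(B) / B), A(B))
print(sol)  # Eq(A(B), C1*B**k) ; k = 1 recovers Fisher's A ∝ B
```

With $k \neq 1$ you get a power law instead of Fisher's strict proportionality.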

Another insight we get is that supply and demand doesn't always work in the simple way described in Marshall's diagrams. We had to make the assumption that A or B was relatively constant while the other changed. In many real world examples we can't make that assumption. A salient one today is the (empirically incorrect) claim that immigration lowers wages. A naive application of supply and demand (an increased supply of labor lowers the price of labor) ignores the fact that more people means more people to buy goods and services produced by labor. Thinking in terms of information, it is impossible to say that you've increased the number of labor supply events without increasing the number of labor demand events, so A and B must both increase.

Instead of the neoclassical picture of ideal markets and simple supply and demand, we have the picture the left (and to be fair many economists) tries to convey of not only market failures and inefficiency but more complex interactions of supply and demand. However, it is also possible through collective action to mend or mitigate some of these failures. We shouldn't assume that just because a market spontaneously formed or produced a result it is working, and we shouldn't assume that because a price went up either demand went up or supply went down.

The market as an algorithm

The picture above is of a market as an algorithm matching distributions by raising and lowering a price until it reaches a stable value. In fact, this picture is of a specific machine learning algorithm called Generative Adversarial Networks (GAN, described in this Medium article or in the original paper). The idea of the market as an algorithm to solve a problem is not new. For example, one of the best blog posts of all time uses linear programming as the algorithm, giving an argument for why planned economies will likely fail; but the same reasons imply we cannot check the optimality of the market allocation of resources (therefore claims of markets as optimal are entirely faith-based). The Medium article uses a good analogy that I will repeat here:


Instead of the complex multidimensional distributions we have paintings. The "supply" B is the forged painting, the demand A is the "real" painting. Instead of the random initial input, we have the complex, irrational, entrepreneurial, animal spirits of people. The detective is the price p. When the detective can't tell the difference between the paintings (i.e. when the price reaches a relatively stable value because the distributions are the same), we've reached our solution (a market equilibrium). 

Note that the problem the GAN algorithm tackles can be represented as a two-player minimax game from game theory. The thing is that with the wrong settings, algorithms fail and you get garbage. I know this from experience in my regular job researching machine learning, sparse reconstruction, and signal processing algorithms. So depending on the input data (i.e. human behavior), we shouldn't expect to get good results all of the time. These failures are exactly the failure of information to flow from the real painting to the forger through the detective – the failure of information from the demand to reach the supply via the price mechanism.
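For reference, that minimax game is, in the notation of the original GAN paper (generator $G$, discriminator $D$, data distribution $p_{data}$, noise prior $p_{z}$):

$$
\min_{G} \max_{D} V(D, G) = \mathbb{E}_{x \sim p_{data}(x)} \left[ \log D(x) \right] + \mathbb{E}_{z \sim p_{z}(z)} \left[ \log \left( 1 - D(G(z)) \right) \right]
$$

In the analogy above, $D$ is the detective (the price) and $G$ is the forger (the supply).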

An interpretation of neoclassical economics for the left

The understanding of neoclassical economics provided by information theory and machine learning algorithms leaves you better equipped to understand markets. Ideas that were posited as articles of faith or created through incomplete arguments by Hayek and Friedman are not the whole story, and on their own they leave you with no knowledge of the ways the price mechanism, marginalism, or supply and demand can go wrong. In fact, leaving out the failure modes effectively declares many of the concerns of the left moot by fiat. The potential and actual failures of markets are a major concern of the left, and are frequently part of discussions of inequality and social justice.

The left doesn't need to follow Chris Hayes's advice and engage with Hayek, Friedman, and the rest of neoclassical economics. The left instead needs to engage with a real world vision of economics that recognizes its potential failures. Understanding economics in terms of information flow is one way of doing just that.

...

Update 26 April 2017

I must add that the derivation of the information equilibrium condition (i.e. dA/dB = k A/B) is originally from a paper by Peter Fielitz and Guenter Borchardt and applied to physical systems. The paper is always linked in the side bar, but it doesn't appear on mobile devices.

Monday, April 24, 2017

Dynamic equilibrium: rental vacancy rate

Much like how I approached job vacancies (and hires) using the dynamic equilibrium model, I thought I'd take a look at rental vacancies (where you could imagine a structurally similar relationship, with rental vacancies as "unemployed" housing). It provides a decent description:



The dynamic equilibrium is -0.0052/year (i.e. vacancies fall by about 0.5% per year in the absence of shocks). The shocks are centered at 1959.1 (positive), 1967.4 (negative), 1985.0 (positive), 2001.6 (positive), and 2013.2 (negative). One could imagine the last two as people leaving rentals for houses in the housing boom, and then returning to rentals after the bust.
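In the absence of shocks, that equilibrium is just exponential decay of the vacancy rate; a minimal sketch (the starting level is made up):

```python
import numpy as np

v0, t0 = 7.0, 2017.0                  # made-up starting vacancy rate [%] and year
t = np.arange(2017, 2028)
v = v0 * np.exp(-0.0052 * (t - t0))   # ~0.5% (relative) decline per year
print(np.round(v, 2))
```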

According to FRED's release calendar, new data comes out on Thursday, before 2017 Q1 GDP data on Friday. This model expects little change; we'll see if that bears out in the data over the next couple years.

Update 27 April 2017

Here's that data point ... not that one point tells us anything; I'll continue to follow this.


...

Update 6 March 2020

I for some reason rarely update this forecast (last time was almost 2 years ago), but here's the latest (with a zoom in, click to enlarge):



This "working paper" is now four years old

... it's more of a "working thesis" now.

Previous birthday celebrations focused on the most popular posts or prediction successes. I thought that for this birthday, we could celebrate some of my favorite posts. In no particular order:

  1. This post provides the best description of the motivation that goes into treating ensembles of markets (which relates to other work on statistical equilibrium) and changing information transfer indices. It is the theoretical basis for two posts below: [3] and [5].
  2. This represents a concise summary of the direction that I've found most interesting recently. Coupled with the basic supply and demand/production function unification [4], the ensemble approach in [1], and the matching theory in [6], it forms the core tools needed to understand a sizable chunk of economics.
  3. An application of [1] to the stock market that explains factor models and their issues.
  4. A really simple way to understand the scope of supply and demand as well as its connection to growth (i.e. production functions).
  5. An application of [1] where the primary factor of production is labor. I originally had looked at the application of the ideas of [1] where the primary factor of production is money (but I didn't put it that way).
  6. This post connects dynamic equilibrium to mainstream matching models.
  7. A pair of posts that use principal component analysis and dynamic equilibrium to get a picture of employment by industry/sector.
  8. A pair of posts that offer a simple 1st-order explanation of income and consumption data. In the second one I talk about methodology. While I enjoy writing the posts on methodology, I'm not really as proud of or interested in them. I think a good analogy is with movies. I like being a director (making movies), but sometimes I review other movies. The posts on methodology all tend to be "bad reviews", which are fun to write (and read). They're just not as fulfilling as doing interesting original work.
  9. In this pair of posts I derive the asset pricing equation and the Euler equation from a maximum entropy argument. They both show that the utility description is a bit superfluous, and since they require maximum entropy distributions, they imply that correlations in agent choices will cause them both to fail. This work is utilized in [11] below.
  10. I show that you can capture the "Kaldor facts" about growth as information equilibrium relationships. I also show that the data is better captured by different relationships.
  11. In order to show that information equilibrium isn't too far afield from mainstream economics, I derived the simple New Keynesian three equation DSGE model in this series using maximum entropy and information equilibrium ‒ or at least something close to it, with meaningful differences due to the information equilibrium approach.
  12. Here, the IT model performs much better than far more complex models and policy heuristics using the same tests that economists used.
  13. There is a very interesting connection between information equilibrium and causal entropy (which was referred to as "a new equation for intelligence" in the TED talk given by one of the paper's authors).
  14. This post itself is terrible and consists mainly of notes. However, it might provide an explicit example of the algorithm that the market represents, and it has a direct connection to information equilibrium. Instead of a rational agent argument where money represents utility, we would transition to an argument where prices represent flows of information.
  15. My "take down" of prediction markets and the idea that markets perform an information aggregating function rather than just solving an allocation problem. This was actually a summary of the original work on prediction markets (see e.g. here) that never went anywhere for me in my regular job (no one wanted to fund it), but did lead to this blog. Basically, this post was part of my notes when I put up these three posts ‒ the original application was going to be metrics for prediction markets.
...

Update 24 April 2017: As I mentioned here, writing this post inspired this graphic that captures the interconnections described above: 


Saturday, April 22, 2017

Good ideas do not need lots of invalid arguments in order to gain public acceptance

There's a certain vein of economic criticism that has a tendency to turn me right back into a defender of the mainstream status quo. An example of it was written by Kate Raworth and published a couple of weeks ago in the Guardian. Prof. Raworth begins by saying modern economics is born of "physics envy". To some degree this is true (Irving Fisher's thesis advisor was the physicist Willard Gibbs), but really modern economic methods were born of the incredibly useful Lagrange multiplier approach that had its first applications in physics but is far more general. It basically provides a way of figuring out the optima of a function given complex constraints. In fact, it is so general, it's used in just about every scientific field including Raworth's ideal of evolutionary biology. Here's an example that took me a few seconds of Googling:

From Evolutionary Biology, Volume 27
edited by Max K. Hecht, Ross J. MacIntyre, Michael T. Clegg

Raworth comments:
Their mechanical metaphor sounds authoritative, but it was ill-chosen from the start – a fact that has been widely acknowledged since the astonishing fragility and contagion of global financial markets was exposed by the 2008 crash. ... So if the economy is not best thought of as a mechanism that returns to equilibrium and follows fixed laws of motion, how should we think of it? Like the living world: it’s complex, dynamic and ever-evolving.
As I just showed, this "mechanical" metaphor also applies to evolutionary biology, and ecosystems sometimes collapse, so there goes that argument. I'd bet that there are a lot more examples if Raworth wanted to look into this more than not at all. Maybe her enthusiasm comes from some sort of false belief that evolutionary biology doesn't have a lot of math in it?

Her comments also reference the 2008 crash as if this somehow invalidates anything about mainstream economics. Saying for certain that the 2008 crash could have been foreseen, prevented, or mitigated using an evolutionary economics "gardening" [1] approach requires a much more established and validated evolutionary macroeconomic theory than exists today. It's a bit like asking physicists why they didn't understand the solar neutrino problem before SNO, and saying the answer is obvious but not providing any details of the requisite neutrino oscillation theory and the evidence supporting it [2]. This issue with Raworth's "secret theory with secret evidence that supports it" reaches its zenith (nadir?) when she says:
The most pernicious legacy of this fake physics has been to entice generations of economists into a misguided search for economic laws of motion that dictate the path of development. People and money are not as obedient as gravity, so no such laws exist.
Either she has a secret theory with secret evidence, or has a time machine enabling her to see the future where this was either proven or humans all died off without finding them. More likely she is just repeating one of the age-old criticisms of science. "This system is too complex for your silly math" (or really that God/Zeus/whoever couldn't be constrained by human mathematical laws) was the same criticism leveled at the founding practitioners of science. People looked at nature and saw a mess; people did not believe it could be understood with simple laws and instead went with mythological explanations that used their intuition about human behavior. Thales was one of the earliest known people to say that nature may well be messy, but it might be amenable to rational argument. Imagine if people had listened to an ancient Greek version of Raworth saying "nature is complex, so mathematics and geometry will be useless". 

Following Thales' example, I refuse to listen to Raworth's completely unsupported claim that no laws of macroeconomics exist. I've called claims like Raworth's the "failure of imagination fallacy" (also known as an argument from incredulity). It is odd that this total pessimism can come from the same source as the unbridled (and unsupported) optimism for the evolutionary approach.

Like most scientists, I would totally get on board with "evolutionary economics" if it had some useful results or evidence in its favor. But paraphrasing Daniel Davies: Good ideas do not need lots of invalid arguments in order to gain public acceptance.

Again, I've discussed this before with regard to David Sloan Wilson.

Footnotes:

[1] Any time I hear the economist as gardener metaphor, it makes me think of this:


[2] For those that might have difficulty following my convoluted physics metaphor, the solar neutrino problem is the financial crisis, SNO is the experiment that eventually confirms whatever theory of the financial crisis is correct, and Raworth's evolutionary economics purports to be the confirmed neutrino oscillation theory. Raworth and other evolutionary economics proponents have not provided any evidence (or any theory for that matter) that their approach is useful or empirically accurate e.g. by showing they could prevent/mitigate/forecast the financial crisis before it happened or really explain any aspect of macroeconomic data at all.

Friday, April 21, 2017

Economics to physics phrasebook


Cameron Murray tweeted out his old post about economics terminology. Someone commented adding a sociology-economics dictionary. I thought I'd get in on the game with an "economics to physics" dictionary (the $^{*}$ means we're using the economics definition):

$$
\begin{array}{ll}
\text{Economics} & \text{Physics} \\
\text{commodity} & \text{matter} \\
\text{comparative advantage} & \text{coupling state spaces changes equilibrium}\\
\text{discount factor} & \text{smooth cutoff regulator} \\
\text{DSGE} & \text{arbitrary stochastic difference equation} \\
& \text{with constraints} \\
\text{demand} & \text{unobservable field that interacts with supply}^{*} \\
\text{endogenous} & \text{not an external field} \\
\text{equilibrium} & \text{solution to set of equations} \\
\text{general equilibrium} & \text{solution to all of the equations} \\
\text{partial equilibrium} & \text{solution to a few of the equations} \\
\text{estimate} & \text{fit parameters to data} \\
\text{Euler equation} & \text{equation of motion for constrained Lagrangian} \\
\text{exogenous} & \text{external field} \\
\text{expectations} & \text{toy model}^{*}\;\text{of the future} \\
E_{t} & \text{time translation operator} \\
\text{EMH} & \text{Brownian motion as an effective theory of prices} \\
& \text{(scale unclear)} \\
\text{inflation} & \text{rate of change of the price level}^{*} \\
\text{price level} & \text{arbitrary linear combination of prices} \\
& \text{(varies)} \\
\text{economics} & \text{theoretical economics} \\
\text{growth economics} & \text{theoretical economics} \\
& n \gg 1, t \gg 1 \;\text{quarter} \\
\text{macroeconomics} & \text{theoretical economics} \\
& n \gg 1, t \sim 1 \;\text{quarter}  \\
\text{microeconomics} & \text{theoretical economics} \\
& n \sim 1, t \ll 1 \;\text{quarter}  \\
\text{natural rate of interest} & \text{unobservable field} \\
\text{model} & \text{toy model} \\
\text{toy model} & \text{handwaving} \\
\text{theory} & \text{philosophy} \\
\infty & \text{sometimes inexplicably has units} \\
\text{growth} & \text{exponential growth} \\
\text{money} & \text{toy model}^{*}\;\text{of money} \\
\pi & \text{inflation}^{*} \; \text{, not}\; 3.14159...\\
\text{total factor productivity} & \text{phlogiston} \\
\text{rational expectations} & \text{causality violating toy model}^{*}\;\text{of the future} \\
\text{recession} & \text{toy model}^{*}\;\text{of a recession} \\
\text{supply} & \text{observable field} \\
\text{utility} & \text{unobservable field}
\end{array}
$$

Housing prices over the long run (are we in a boom?)

Kevin Drum has a piece at Mother Jones (H/T David Anderson) that posits we are in the midst of another housing boom in the US, calling it the second biggest on record according to the Case-Shiller index. [Update: Brad DeLong comments on Drum, proposing a boom, bust, overshoot, rebound mechanism discussed below.] Now I've previously looked at the index using the dynamic equilibrium model (see also this presentation), but only back to the late 70s because that was what was available on FRED. Using Shiller's data from his website, we can now go back to the late 1800s.

Drum uses "real" values, adjusting for inflation. This creates some issues if your measure of inflation is mis-matched with respect to your nominal values (as described in an example here, and see addendum below), so we're going to focus on nominal values.

First, Drum seems to be correct about the boom ‒ had we returned to the long run dynamic equilibrium, we would be at a much lower level (in everything that follows, blue is the model and yellow is the data):



In the second graph we show the 9 shocks that best describe the data (7 booms, 2 busts):


The two busts are associated with the Great Depression (1930.0) and the Great Recession (2008.5). Also of interest ‒ the Great Recession was twice as bad for housing prices as the Great Depression (in relative terms): a shock size of 0.34 for the Depression vs 0.78 for the Recession (read as log-price changes, roughly 29% vs 54% declines). The equilibrium rate of growth in the absence of shocks is ~ 1% per year.

According to the model, the latest boom appears to be ending, being centered in 2014.0 with a duration (width) of 1.5 years (think of this as the two-sided 1-sigma width of the Gaussian shock). It is not the largest boom in relative size (actually the smallest, but comparable to the 1917.9, 1970.0, and 1986.7 booms). However, with only the leading edge of the boom visible in the data, the duration and size are somewhat uncertain:


Going back to David Anderson's tweet up at the top, my own anecdotal evidence confirms this second boom as housing prices in the Seattle area have increased dramatically. Some other observations:
  • It does not appear that any housing booms are necessarily unsustainable. Out of seven booms, there have been only two busts. And the only two busts were during the Great Depression and Great Recession. While it may be that the housing bubble contributed to the latter, that leaves a sample of one on which to base conclusions like "unsustainable housing booms cause recessions when they bust". In fact, of the three largest booms, only one was followed by a bust.
  • In the absence of shocks, housing prices increase at about 1% nominally per year meaning that in the absence of shocks, housing would naturally become more affordable if wages kept up with inflation. It is housing price booms that make housing unaffordable.
*  *  *

Addendum: Inflation

Shiller deflates his housing prices by the consumer price index (all items). If we apply the same analysis to the CPI data, we find a roughly similar structure:



Update + 3 hours: forgot the inflation rate graph.

However there is no CPI bust accompanying the Great Recession like for the Great Depression. Also, CPI has a shock in 1974 likely due to the oil crisis. That leaves us with 8 shocks (7 booms, 1 bust):


As you can see, the centers and durations (σ in the figure below) are mis-matched, yielding the fluctuating effect discussed in this post when you subtract the inflation shocks from the nominal housing price shocks:


Of the 7 shocks that match between the nominal index and the CPI index, all have some serious mismatch with the closest match being the 1970/1969 shock:

Center [y] (duration [y]) (size/amplitude [rel])

  1917.9 vs 1917.5 (1.7 vs 1.2) (0.3 vs 0.5)
  1930.0 vs 1930.5 (1.7 vs 3.4) (0.3 vs 0.5)
  1945.8 vs 1945.5 (3.1 vs 3.4) (0.9 vs 0.4)
  1970.0 vs 1969.4 (1.8 vs 1.8) (0.2 vs 0.2)
  1978.0 vs 1979.8 (3.0 vs 2.2) (0.8 vs 0.5)
  1986.7 vs 1989.7 (1.6 vs 4.4) (0.3 vs 0.3)
  2004.8 vs 2004.2 (5.4 vs 4.8) (1.3 vs 0.1)

These mis-matches yield fluctuations in the "real" price index (I am showing 4 cases = 1917, 1970/69, 1978/79, and 1986/89):


In fact, the latter two are close enough together that they combine into a series of two booms and two busts (in the real data) when in fact they are just two booms (in the nominal data and CPI):



The net effect is to make it look like the 1970s boom and the 1980s boom were followed by busts when neither the nominal price index nor the CPI have "busts". Another way to put this is that (in nominal terms) 1 boom + 1 boom = 2 booms + 2 busts (in real terms).

Therefore it is questionable whether the real housing price is the best measure during shocks. Outside of shocks, the real housing price shows how much housing increases relative to inflation (i.e. most goods and some fixed income). In fact, the dynamic equilibrium shows CPI grows at about 1.5% per year. This means that real housing prices in the absence of shocks decrease at about 0.5% per year (note that gold's nominal price decreases at about 2.7%). This is apparent in Drum's graph of Shiller's data in the periods between the booms. Combining the above descriptions of the nominal price index and the CPI, we have a (remarkably good) model of the real Case-Shiller index:


Note that the real index lacks a strong visible shock for the Great Depression (!) (the period from 1917 to 1945 shows relatively stable housing prices) whereas in the nominal index, it is one of the only negative shocks. 

Because the above model for the nominal index sees the most recent boom ending, we should see a return to the 0.5% decrease per year. But again, figuring out the size of booms and busts before they occur is generally difficult and fraught with uncertainty (for example, see these estimates of the unemployment rate ‒ the model appears to under-predict at first, then over-predict so we should probably take this as an under-prediction of the size of the current boom).

Update 22 April 2017

Brad DeLong comments on Drum's article, saying that we might be experiencing a rebound from too far of a collapse post-financial crisis. Now it doesn't appear that the pre-2000s housing bubble equilibrium has been restored:


We're still a ways above that, so it seems unlikely. However, if part of the 2000s housing bubble was sustainable, then it is possible:


This does lend itself to an overshoot, bust, undershoot, and return-to-equilibrium picture. However, it requires an ad hoc decomposition of the 2000s bubble into a "sustainable" and an "unsustainable" component. And there is insufficient data (only 2 busts: the Great Depression and Great Recession) to start making up theories about the complex decomposition/dynamics involved in a boom-bust cycle.

The simpler model (with independent booms and busts) also says something similar ‒ a flattening of nominal housing price increases and a return to the (dynamic) equilibrium of 1% increase per year. Therefore Occam's razor should come into play again: the simpler explanation is better ... at least until we get more data.

Thursday, April 20, 2017

Growth regimes, lowflation, and dynamic equilibrium

David Andolfatto points out how different models frame the data:
What does Bullard have in mind when he speaks of a low-growth "regime?" The usual way of interpreting development dynamics is that long-run growth is more or less stable and that deviations from this stable trend represent "cyclical" mean-reverting departures from trend. And if it's "cyclical," then it's temporary--we should be forecasting reversion to the mean in the near future--like the red forecasting lines in the picture below. ... This view of the world can lead to a series of embarrassing forecast errors. Since the end of the Great Recession, for example, you would have forecast several recoveries, none of which have materialized.  ... But what if that's not the way growth happens? Suppose instead that growth occurs in decade-long spurts? Something like this [picture]. ...
The two accompanying pictures are here:


As you can see, interpreting data depends on the underlying model. I've talked about this before, e.g. here or here. Let's try another!

What about dynamic equilibrium (see also here)? In that framework, we have a shock centered in the late 70s that hits both NGDP per capita (prime age) and the GDP deflator:


At this resolution, there is another shock to NGDP alone (although it might be visible in the deflator data, see here, but it's not relevant to the discussion in this post). Note: I am talking about quantities per capita (prime age) so it should be understood if I leave off a p.c. in the following discussion. The figure shows the transition locations as well as the width (red). The NGDP p.c. transition is much wider than the deflator transition. Combining these (dividing the NGDP p.c. model by the deflator model), you get RGDP per capita:


The lines represent the "dynamic equilibrium" for RGDP p.c. made from the dynamic equilibria for NGDP p.c. minus the GDP deflator. I translated it up and down to the maximum and minimum during the period as well as for recent times. You can see how the interaction between two Gaussian shocks of different widths gives you an apparent fluctuating growth rate, which is what Bullard/Andolfatto see in the data:

It's actually just the mis-match between the NGDP shock and the GDP deflator shock (likely due to women entering the workforce) that makes it look like different growth regimes when in fact there is just one. If the shocks to each measure were exactly equal, there'd be no change. Therefore it is entirely possible these "growth regimes" are just artefacts of mis-measuring the price level (deflator/inflation) data ‒ that a proper measurement of the price level would result in no changes (since NGDP and the deflator would be subject to the same shocks).

In fact, a LOESS smoothing (gray solid) of the RGDP growth data (blue) almost exactly matches the dynamic equilibrium (blue) result during the 70s and 80s:


In this graph the gray horizontal lines are at zero growth and the dynamic equilibrium growth rate (1.6%,  equal to the dynamic equilibrium growth rate of NGDP = 3.6% minus the dynamic equilibrium growth rate of the deflator/inflation = 2%). We can see that we were at the dynamic equilibrium in the 1950s and the early 2000s as well as today. The other times, we were still experiencing deviations due to the shock. 

I also show Andolfatto's 10-year annualized average growth rate (gray dotted), which basically matches up with a 10-year shifted version of the LOESS smoothing.

I'd previously talked about Bullard's regime-switching approach here. In that post, I showed how the information equilibrium approach reverses the flow of the regime selection diagram. But I also talked about how the information equilibrium monetary models can be divided into "high k" and "low k" regimes (k is the information transfer index). High k essentially means the quantity theory of money is a good effective theory for high inflation, whereas low k means the ISLM model is a good effective theory for low inflation (or we just have something more complex, as I discuss in the quantity theory link). This means that monetary policy would be more effective in a high inflation environment than in a low inflation environment. I've also discussed "lowflation" regimes before here.

This brings up another topic. On Twitter, Srinivas pointed me to a new SF Fed paper [pdf] on monetary policy effectiveness that comes to similar conclusions based on the data: there are low inflation regimes where monetary policy is less effective than in high inflation regimes.


Actually, as indicated by one of the graphs in my reply, I've been discussing this since the first few months of this blog (almost 4 years ago).

One difference between the inflation (i.e. k) regimes and Bullard's regimes is that there isn't "switching" so much as a continuous drift. You don't go from high k to low k in a short period, but rather continuously pass through moderate k values over a few decades.

Is there a way to connect lowflation to dynamic equilibrium? Well, one possibility is that we only have "high k" during shocks but we lack enough macroeconomic data to be able to see this clearly – the shock from the first half of the post-war period has only faded out recently.

However, this would make more sense of the fact that all countries haven't reached low k in the partition function/ensemble/statistical equilibrium picture. It's a question that has floated around in the back of my mind for a while ‒ ever since I put up this picture (from e.g. here):


The problem is evident in the light green US data, which comes from the Depression. That means the US was once at "low k", but then went to "high k" in the WWII and post-WWII era and has since steadily fallen back to low k. The problem is that while the ensemble approach can handle the drift towards lower k values (i.e. the expected value of k falls in an ensemble of markets as the factors of production increase), the mechanism for increasing k involves ad hoc modeling (e.g. exit/reset through wartime hyperinflation).

However, what if shocks (in the dynamic equilibrium sense) reset k to higher values (in the ensemble sense)? If we take this view, then there might be different growth "regimes", but they split into "normal" and "shock" periods (the red bands in the graphs above). The shock periods can have different dynamics depending on the shocks (e.g. the fluctuating RGDP due to the mis-match between the shock to the price level and the shock to NGDP). Outside of these periods, we have "normal" times characterized by e.g. a constant RGDP growth (the gray line described in the graph above).

Which view is correct?

Given the quality of the description of the data using the dynamic equilibrium model, I don't think Bullard's regimes capture it properly. We have a shock that includes both high and low growth, but the low growth regimes on either side of the shock (today and the 1950s) represent the "normal" dynamic equilibrium (the low RGDP growth period of the 1970s wasn't the dynamic equilibrium, but rather just a result of our measure of the GDP deflator and definition of "real" quantities). This is evident from the good match between the RGDP data and the theoretical curve that is just NGDP/GDPDEF (NGDP divided by the GDP deflator). NGDP and the deflator have one major shock in the 1970s that turns into a fluctuating growth rate simply because the difference of two Gaussians [1] with different widths fluctuates:


The two high growth regimes and the intervening low growth regime are simply due to this. Occam's razor would say that there is really just one shock [2] with different widths for the different observables centered in the late 70s instead of three different manifestations of two growth regimes (per Bullard).
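A self-contained way to see this (made-up shock parameters; only the 3.6% and 2% equilibria from above are taken from the fits):

```python
import numpy as np

t = np.linspace(1950, 2000, 501)

def shock(t, a, t0, w):
    """Gaussian contribution of a (logistic) shock to the log growth rate."""
    return a * np.exp(-0.5 * ((t - t0) / w) ** 2) / (w * np.sqrt(2 * np.pi))

# one shock each to log NGDP and the log deflator: similar centers,
# but the NGDP shock is much wider (all parameters made up)
ngdp_growth = 0.036 + shock(t, a=1.0, t0=1978.0, w=8.0)
defl_growth = 0.020 + shock(t, a=0.8, t0=1977.0, w=4.0)

rgdp_growth = ngdp_growth - defl_growth
# the difference dips below and rises above the 1.6% equilibrium
print(rgdp_growth.min(), rgdp_growth.max())
```

The narrower deflator shock dominates near the shock center (pushing RGDP growth below equilibrium) while the wider NGDP shock dominates in the tails (pushing it above), producing the apparent alternation of growth regimes from a single underlying shock.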


Footnotes:

[1] The derivative of the step function in the dynamic equilibrium is approximately a Gaussian function (i.e. a normal distribution PDF), and when you divide NGDP by DEF and look at the log growth rate you end up with the difference of the two Gaussians.

[2] This is the same shock involved in interest rates, inflation, employment-population ratio, etc so we should probably attribute it to a single source instead of more complex models (at least without other information).