Friday, July 28, 2017

Dynamic equilibrium and ensembles (and collected results)


I previously worked out that ensembles of information equilibrium relationships have a formal resemblance to a single aggregate information equilibrium relationship involving the ensemble averages:

$$
\frac{d \langle A \rangle}{dB} = \langle k \rangle \frac{\langle A \rangle}{B}
$$

I wanted to point out that this means ensemble ratios and abstract prices will exhibit a dynamic equilibrium just like individual information equilibrium relationships if $\langle k \rangle$ changes slowly (with respect to both $B$ and now time $t$):

$$
\frac{d}{dt} \log  \frac{\langle A \rangle}{B} \approx (\langle k \rangle - 1) \beta
$$

plus terms $\sim d\langle k \rangle /dt$ where we assume (really, empirically observe) $B \sim e^{\beta t}$ with growth rate $\beta$. The ensemble average version allows for the possibility that $\langle k \rangle$ can change over time (if it changes too quickly, additional terms become important in the solution to the differential equation as well as the last dynamic equilibrium equation).
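
Spelling out the intermediate step (dropping the $d\langle k \rangle /dt$ terms and using $dB/dt = \beta B$):

$$
\frac{d}{dt} \log \frac{\langle A \rangle}{B} = \frac{1}{\langle A \rangle} \frac{d \langle A \rangle}{dB} \frac{dB}{dt} - \frac{1}{B} \frac{dB}{dt} \approx \langle k \rangle \beta - \beta = (\langle k \rangle - 1) \beta
$$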

Generally, considering the first equation above with a slowly changing $\langle k \rangle$, we can apply nearly all of the results collected in the tour of information equilibrium chart package to ensembles of information equilibrium relationships. These have been described in three blog posts:
1. Self-similarity of macro and micro 
Derives the original ensemble information equilibrium relationship 
2. Macro ensembles and factors of production 
Lists the result for two or more factors of production (same result gives matching models) 
3. Dynamic equilibrium and ensembles 
The present post arguing the extension of the dynamic equilibrium approach to ensemble averages

Thursday, July 27, 2017

Adding race and gender to macroeconomics

Narayana Kocherlakota has an article at Bloomberg View about how macroeconomists can't keep ignoring race and gender ‒ something I agree with. In fact, I believe that ignoring race and gender has led to two major misunderstandings in macroeconomics ... two misunderstandings that can be clarified by use of the dynamic equilibrium model.

Women entering the workforce

There are a variety of explanations of the so-called "Great Inflation" of the 60s and 70s, some monetary, some focused on government spending. However, given the strong connection between labor force growth and inflation (see also the piece on demographic inflation by Steve Randy Waldman aka Interfluidity), it seems likely that the long non-equilibrium process of women entering the workforce in the 1960s and 70s is the causal factor. The main shock to civilian labor force participation is dominated by the effect of women getting jobs, and the employment-population ratios for men and women show different general structures in terms of dynamic equilibrium:


In fact, using the charts from here to display the shocks (shown as vertical lines in the graphs above) and their widths (durations), we can see that the shocks to labor force participation and to the employment-population ratio for women precede the shocks to inflation across various measures:


[added in update] Positive shocks to the measure in blue, negative shocks in red. (Note that the increase in labor force participation for women consists of a long positive shock with a few negative shocks corresponding to recessions that aren't shown.)

Racial disparities in unemployment

Another area where macro without race and gender leads to misunderstanding is in unemployment rate dynamics. Ordinary observation of unemployment statistics leads Kocherlakota to write:
Arguably the most important is that blacks ‒ especially black men ‒ are much more likely to lose their jobs. This risk of job loss is highly cyclical, which is why blacks fare so much worse than whites during recessions. For example, the black unemployment rate peaked at nearly 17 percent after the Great Recession, compared with just over 9 percent for whites.
The wrong framework (and the general lack of attention to race and gender) leads Kocherlakota to the wrong diagnosis in this case. The problem is not necessarily a dynamic one (i.e. due to black workers losing jobs more than white workers), but rather one of hysteresis [1]. The overall dynamics of black and white unemployment are approximately the same (with this model indicating a similar matching function).


In the graph above, the dynamic equilibrium fit to black unemployment is applied to white unemployment with the only difference being the starting value (about 5% instead of 10%). The model describes both sets of data roughly equally well, indicating that the issue is initial conditions (slavery, Jim Crow), not present-day dynamics. This hysteresis arises because unemployment declines at the same relative rate for both black and white workers and both are subjected to the same shocks to the macroeconomy.

One way to imagine this is as two airplanes flying from Seattle to Chicago, with one taking off about an hour later than the other. Since both planes are subjected to the same wind conditions (macro shocks), the plane taking off later never catches up. In this case, the solution required is different from the solution to the problem as diagnosed by Kocherlakota: one would need to either make macro shocks affect black workers less, or increase employment through increased hiring. We are talking about something akin to reparations: Black Americans need to be compensated for being kept out of jobs by racist policies of the past.
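
A minimal numerical sketch of this hysteresis story is below. It assumes the dynamic equilibrium form in which log unemployment declines at a constant rate between logistic recession shocks; all of the parameter values are made up for illustration, not fitted:

```python
import numpy as np

# Made-up parameters: same relative decline rate and same recession shocks
# for both series; only the initial unemployment level differs.
alpha = 0.05                       # decline rate of log(u) per year between shocks
shock_centers = [8.0, 17.0, 27.0]  # recession shock centers (years)
shock_sizes = [0.5, 0.4, 0.6]      # shock amplitudes on the log scale
shock_width = 0.5                  # shock duration scale (years)

def log_unemployment(t, u0):
    """Dynamic equilibrium sketch: linear decline in log(u) plus logistic shocks."""
    logu = np.log(u0) - alpha * t
    for t0, a in zip(shock_centers, shock_sizes):
        logu += a / (1.0 + np.exp(-(t - t0) / shock_width))
    return logu

t = np.linspace(0.0, 35.0, 500)
u_a = np.exp(log_unemployment(t, u0=5.0))   # starts around 5%
u_b = np.exp(log_unemployment(t, u0=10.0))  # starts around 10%

# Same relative dynamics => the ratio stays constant; the gap never closes on its own
print(np.allclose(u_b / u_a, 2.0))  # True
```

Because the two series share the same relative dynamics and the same shocks, the ratio of the two unemployment rates is constant over time.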

Just two examples

Those are just two examples I've seen in my work with the dynamic equilibrium model, but they're definitely not the only ones. The "Great Inflation" sent macroeconomics off on a wild goose chase, attributing the inflation to central bank policy (an attribution that continues to this day) and ending in DSGE models that can't forecast. If it had been understood at the time that a certain amount of inflation would be inevitable because of women entering the workforce, the history of the past 40 years of macroeconomics might have been different.

...

Update: changed EPOP graphs to have the same x- and y-axis. The original graph is here:


...

Footnotes:

[1] When I say hysteresis, I am in no way saying discrimination has ended. For example, being employed does not tell us whether someone is underemployed or paid less for the same job.

Macro ensembles and factors of production


I was inspired by Dietrich Vollrath's latest blog post to work out the generalization of the macro ensemble version of the information equilibrium condition [1] to more than one factor of production. However, as it was my lunch break, I didn't have time to LaTeX up all the steps so I'm just going to post the starting place and the result (for now).

We have two ensembles of information equilibrium relationships $A_{i} \rightleftarrows B$ and $A_{j} \rightleftarrows C$ (with two factors of production $B$ and $C$), and we generalize the partition function analogously to multiple thermodynamic potentials (see also here):

$$
Z = \sum_{i j} e^{-k_{i}^{(1)} \log B/B_{0} -k_{j}^{(2)} \log C/C_{0}}
$$

Playing the same game as worked out in [1], except with partial derivatives, you obtain:

$$
\begin{align}
\frac{\partial \langle A \rangle}{\partial B} = & \; \langle k^{(1)} \rangle \frac{\langle A \rangle}{B}\\
\frac{\partial \langle A \rangle}{\partial C} = & \; \langle k^{(2)} \rangle \frac{\langle A \rangle}{C}
\end{align}
$$

This is the same as before, except now the values of $k$ can change. If the $\langle k \rangle$ change slowly (i.e. treated as almost constant), the solution can be approximated by a Cobb-Douglas production function:

$$
\langle A \rangle = a \; B^{\langle k^{(1)} \rangle} C^{\langle k^{(2)} \rangle}
$$

And now you can read Vollrath's piece keeping in mind that using an ensemble of information equilibrium relationships implies $\beta$ (e.g. $\langle k^{(1)} \rangle$) can change and we aren't required to maintain $\langle k^{(1)} \rangle + \langle k^{(2)} \rangle = 1$.
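
As a rough numerical illustration of that point (a sketch with made-up $k_{i}$ values, assuming the ensemble average is taken with the partition function weights $e^{-k_{i} \log B/B_{0}}/Z$ so that $\langle k \rangle = - \partial \log Z / \partial \log (B/B_{0})$), the exponent drifts slowly as the factor of production grows:

```python
import numpy as np

rng = np.random.default_rng(0)
k1 = rng.uniform(0.2, 1.5, size=1000)  # made-up IT indices for the B factor

def k_avg(k, b_over_b0):
    """<k> = sum_i k_i exp(-k_i log(B/B0)) / Z, i.e. -d log Z / d log(B/B0)."""
    w = np.exp(-np.outer(np.log(b_over_b0), k))  # weights, shape (len(b), len(k))
    return (w * k).sum(axis=1) / w.sum(axis=1)

B_over_B0 = np.array([1.0, 2.0, 5.0, 10.0, 50.0])
print(np.round(k_avg(k1, B_over_B0), 3))  # <k^(1)> falls slowly as B/B0 grows
```

Note that the partition function as written above factorizes into a $B$ piece and a $C$ piece, so $\langle k^{(1)} \rangle$ in this sketch depends only on $B$.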

...

Update 28 July 2017

I'm sure it was obvious to readers, but this generalizes to any number of factors of production using the partition function

$$
Z = \sum_{i_{n}} \exp \left( - \sum_{n} k_{i_{n}}^{(n)} \log B^{(n)}/B_{0}^{(n)} \right)
$$
where instead of $B$ and $C$ (or $D$), we'd have $B^{(1)}$ and $B^{(2)}$ (or $B^{(3)}$). You'd obtain:

$$
\frac{\partial \langle A \rangle}{\partial B^{(n)}} = \; \langle k^{(n)} \rangle \frac{\langle A \rangle}{B^{(n)}}
$$

Gross National Product

I looked at NGDP data in the past with the dynamic equilibrium model (see here [1] and here [2]); however, the annual GNP time series at FRED goes back a bit further in time and includes the onset of the Great Depression. Here are the results, first using the housing and stock market "bubble" frame for the 1990s-2000s, and then the "no knowledge" frame (discussed in [1]):



Here are the GNP growth rates:



It will be interesting to see which one is the better model. The latter suggests a potential "demographic" shift of e.g. baby boomers leaving the workforce over a ten year period centered around 2014.

Self-similarity in dynamic equilibrium

Let me say up front I am not saying the idea that stock market price time series are self-similar is new. What's new is that a specific structure (i.e. the dynamic equilibrium + shocks) appears at different scales. Here we steadily zoom in on the S&P 500 from a multi-year timescale, to a few years, to on the order of a year, down to months (discovering new shocks at smaller and smaller scales):





Wednesday, July 26, 2017

Updating Samuelson's family tree ...


PS This was a bit tongue in cheek.

A dynamic equilibrium history of the United States


In writing the previous post, I got the idea of collecting all of the dynamic equilibrium results for the US into a single "infographic". It was also inspired by the recently scanned 75 Years of American Finance: A Graphic Presentation, 1861-1935, the 85-foot long detailed timeline compiled by Merle Hostetler in 1936 available at FRASER.

Hopefully the chart is fairly self-explanatory: positive shocks in blue, negative shocks in red, recessions in beige. The "tapes" indicate the range of data analyzed with the dynamic equilibrium model. The widths of the bars are proportional to the widths of the shock (roughly, the 1-sigma width).

U is the unemployment rate. CLF is the civilian labor force (participation rate). EPOP is the employment-population ratio (for men and women). PCE is the personal consumption expenditures price index. CPI is the consumer price index (all items), and C-S is the version available with the Case-Shiller data (see here). The Case-Shiller housing price index itself is included along with the S&P 500. AMB stands for adjusted monetary base.

The arrow indicates the non-equilibrium process of women entering the workforce (where I didn't try to decouple the recession shocks from the broad positive shock).

Here is a zoomed-in version of the post-war period:


I'll leave the possible narratives to comments.

...

Update 4pm

I added the multiple shock version of the PCE dynamic equilibrium discussed in this post on the Phillips Curve. We can resolve the main shock of the 1970s into several shocks; these are shown in purple:


Additionally, we can see the "vanishing" Phillips Curve:


Unemployment shocks are preceded by PCE inflation shocks: as unemployment recovers from the previous shock, inflation rises (unemployment goes down while inflation goes up), and because the inflation shock is ending by the time unemployment rises again, inflation is falling while unemployment is rising. That goes for the recessions of the 70s, 80s and 90s. However, the early 2000s recession doesn't have a distinct shock (at least not one that the algorithm can find in the data), and the Great Recession is preceded by a fairly small shock relative to the previous ones.

And even if it wasn't fading away, the causality is uncertain here. It could well be from unemployment to inflation and not the other way around (i.e. low unemployment causes inflation, but inflation doesn't cause unemployment).

...

Update 27 July 2017

I updated all the figures with more accurate versions of the widths and years (I had read some of them off the relevant graphs). I also increased the font size a bit and the image sizes because some of the lines are too narrow to show up except on a big image.

Tuesday, July 25, 2017

Causality in money and inflation ... plus some big questions

I noticed that the monetary base had what looks like a series of stepped transitions, so I tried the dynamic equilibrium model out on the data. The description is decent:



I am showing the results assuming a dynamic equilibrium growth rate μ of zero, but I also tried the entropy minimization and found μ ~ 1.6%. It doesn't strongly change the results either way, so we can reasonably say that the dynamic equilibrium growth rate of the monetary base is close to zero.

I imagine many readers of econ blogs out there spitting out their coffee and saying: "Close to zero?!!" Yes, the monetary base in the absence of shocks grows about as fast as PCE inflation in the absence of shocks (π ~ 1.7%).

Aha! So, that's basically the quantity theory of money, right?

Well, no.

The interesting piece comes from the big shock in the middle part of the twentieth century. I've looked at several models of several macroeconomic observables that have this major shock, and we can play a game of "one of these things is not like the others":

NGDP [added in update]: 1976.6
NGDP/L: 1977.5 ± 0.1
CPI: 1977.7 ± 0.1
PCE: 1978.3 ± 0.1
(Prime age) CLF: 1978.4 ± 0.1
AMB [this post]: 1985.2 ± 0.1

The center of the monetary base shock comes well after the inflation, labor force participation, and output per employee. So while the PCE inflation rate and the monetary base growth rate match up in equilibrium (both ~ 1.6-1.7%), shocks to inflation are *followed* by shocks to the monetary base. Causality appears to go from inflation to "money printing", not the other way around.

As a side note, the slowdown in monetary base growth preceding the Great Recession that is used as part of the case for the claim that the Fed caused the recession (by e.g. Scott Sumner) is actually just the end of this large transition/shock in the monetary base.

I have an hypothesis that instead of Fed policy, the labor force growth slowdown might be behind the various bubbles (dot-com, housing) and financial crises of the past few decades. Much of the growth people were accustomed to in the 60s, 70s, and 80s was due to women entering the workforce (enhanced by more minorities in the workforce due to Civil Rights legislation, and by the post-WWII baby boom). As this surge in labor force participation faded in the 1990s, reaching its new equilibrium, investors looked for new (and potentially risky) sources of growth. This led to bubbles and crashes as investors sought to maintain the rates of asset growth once supported by a growing labor force.

I am also working on an hypothesis that the Great Depression was caused by similar factors, except in that case it was the agriculture-industry transition that was ending. There was an analogous surge in labor force participation (including a surge in women entering the workforce) in the 1910s and 20s that is apparent in census data (see e.g. here or here).

The question arises: what does a "normal" economy look like? How does an economy that isn't undergoing some major shock (demographic or otherwise) function? I wrote about this before, and I think the answer is that we don't really know as there's no data. More and more I'm convinced we're flying blind here.

Wednesday, July 19, 2017

What mathematical theory is for

Blackboard photographed by Spanish artist Alejandro Guijarro at the University of California, Berkeley.

In the aftermath of the Great Recession, there has been much discussion about the use of math in economics. Complaints range from "too much math" to "not rigorous enough math" (Paul Romer) to "using math to obscure" (Paul Pfleiderer). There are even complaints that economics has "physics envy". Ricardo Reis [pdf] and John Cochrane have defended the use of math saying it enforces logic and that complaints come from people who don't understand the math in economics.

As a physicist, I've had no trouble understanding the math in economics. I'm also not averse to using math, but I am averse to using it improperly. In my opinion, there seems to be a misunderstanding among both detractors and proponents of what mathematical theory is for. This is most evident in macroeconomics and growth theory, but some of the issues apply to microeconomics as well.

The primary purpose of mathematical theory is to provide equations that illustrate relationships between sets of numerical data. That's what Galileo was doing when he rolled balls down inclined planes (comparing distance rolled with time measured by flowing water), discovering that distance was proportional to the square of the water volume (i.e. time).

Not all fields deal with numerical data, so math isn't always required. Not a single equation appears in Darwin's Origin of Species, for example. And while there exist many cases where economics studies unquantifiable behavior of humans, a large portion of the field is dedicated to understanding numerical quantities like prices, interest rates, and GDP growth.

Once you validate the math with empirical data and observations, you've established "trust" in your equations. Like a scientist's academic credibility letting her make claims about the structure of nature or simplify science to teach it, this trust lets the math itself become a source for new research and pedagogy.

Only after trust is established can you derive new mathematical relationships (using logic, writing proofs of theorems) using those trusted equations as a starting point. This is the forgotten basis in Reis' claims about math enforcing logic. Math does help enforce logic, but it's only meaningful if you start from empirically valid relationships.

This should not be construed to require models to start with "realistic assumptions". As Milton Friedman wrote [1], unrealistic assumptions are fine as long as the math leads to models that get the data right. In fact, models with unrealistic assumptions that explain data would make a good scientist question her thinking about what is "realistic". Are we adding assumptions we feel in our gut are "realistic" that don't improve our description of data simply because we are biased towards them?

Additionally, toy models, "quantitative parables", and models that simplify in order to demonstrate principles or teach theory should either come after empirically successful models have established "trust", or they themselves should be subjected to tests against empirical data. Keynes was wrong when, in a letter to Roy Harrod, he said that one shouldn't fill in values in the equations. Pfleiderer's chameleon models are a symptom of ignoring this principle of mathematical theory. Falling back on the claim that a model is a simplified version of reality when it fails against the data should immediately prompt the question of why we're considering the model at all. Yet Pfleiderer tells us some people consider this argument a valid defense of their models (and therefore their policy recommendations).

I am not saying that all models have to perform perfectly right out of the gate when you fill in the values. Some will only qualitatively describe the data with large errors. Some might only get the direction of effects right. The reason to compare to data is not just to answer the question "How small are the residuals?", but more generally "What does this math have to do with the real world?" Science at its heart is a process for connecting ideas to reality, and math is a tool that helps us do that when that reality is quantified. If math isn't doing that job, we should question what purpose it is serving.  Is it trying to make something look more valid than it is? Is it obscuring political assumptions? Is it just signaling abilities or membership in the "mainstream"? In many cases, it's just tradition. You derive a DSGE model in the theory section of a paper because everyone does.

Beyond just comparing to the data, mathematical models should also be appropriate for the data.

A model's level of complexity and rigor (and use of symbols) should be comparable to the empirical accuracy of the theory and the quantity of data available. The rigor of a DSGE model is comical compared to how poorly the models forecast. Their complexity is equally comical when they are outperformed by simple autoregressive processes. DSGE models frequently have 40 or more parameters. Given only 70 or so years of higher quality quarterly post-war data (and many macroeconomists only deal with data after 1984 due to a change in methodology), 40 parameter models should either perform very well empirically or be considered excessively complex. The poor performance ‒ and excessive complexity given that performance ‒ of DSGE models should make us question the assumptions that went into their derivation. The poor performance should also tell us that we shouldn't use them for policy.
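
For a rough sense of scale (a back-of-the-envelope count using the numbers above):

$$
70 \ \text{yr} \times 4 \ \text{quarters/yr} \approx 280 \ \text{observations} \quad \Rightarrow \quad \frac{280 \ \text{observations}}{40 \ \text{parameters}} = 7 \ \text{observations per parameter}
$$

Restricting to post-1984 data leaves roughly 130 quarterly observations, or about 3 per parameter.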

A big step in using math to understand the world is when you've collected several different empirically successful models into a single paradigm or framework. That's what Newton did in the seventeenth century. He collected Kepler's, Galileo's, and others' empirical successes into a framework we call Newtonian mechanics.

When you have a mathematical framework built upon empirical successes, deriving theorems starts to become a sensible thing to do (e.g. Noether's theorem in physics). Sure, it's fine as a matter of pure mathematics to derive theorems, but only after you have an empirically successful framework do those theorems have implications for the real world. You can also begin to understand the scope of the theory by noting where your successful framework breaks down (e.g. near the speed of light for Newtonian mechanics).

A good case study for where this has gone wrong in economics is the famous Arrow-Debreu general equilibrium theorem. The "framework" it was derived from is rational utility maximization. This isn't a real framework because it is not based on empirical success but rather philosophy. The consequence of inappropriately deriving theorems in frameworks without empirical (what economists call external) validity is that we have no clue what the scope of general equilibrium is. Rational utility maximization may only be valid near a macroeconomic equilibrium (i.e. away from financial crises or recessions) rendering Arrow-Debreu general equilibrium moot. What good is a theorem telling you about the existence of an equilibrium price vector when it's only valid if you're in equilibrium? That is to say the microeconomic rational utility maximization framework may require "macrofoundations" — empirically successful macroeconomic models that tell us what a macroeconomic equilibrium is.

From my experience making these points on my blog, I know many readers will say that I am trying to tell economists to be more like physics, or that social sciences don't have to play by the same rules as the hard sciences. This is not what I'm saying at all. I'm saying economics has unnecessarily wrapped itself in a straitjacket of its own making. Without an empirically validated framework like the one physics has, economics is actually far more free to explore a variety of mathematical paradigms and empirical regularities. Physics is severely restricted by the successes of Newton, Einstein, and Heisenberg. Coming up with new mathematical models consistent with those successes is hard (or would be if physicists hadn't developed tools that make the job easier like Lagrange multipliers and quantum field theory). Would-be economists are literally free to come up with anything that appears useful [2]. Their only constraint on the math they use is showing that their equations are indeed useful — by filling in the values and comparing to data.

Footnotes:

[1] Friedman also wrote: "Truly important and significant hypotheses will be found to have 'assumptions' that are wildly inaccurate descriptive representations of reality, and, in general, the more significant the theory, the more unrealistic the assumptions (in this sense) (p. 14)." This part is garbage. Who knows if the correct description of a system will involve realistic or unrealistic assumptions? Do you? Really? Sure, it can be your personal heuristic, much like many physicists look at the "beauty" of theories as a heuristic, but it ends up being just another constraint you've imposed on yourself like a straitjacket.

[2] To answer Chris House's question, I think this freedom is a key factor for many physicists wanting to try their hand at economics. Physicists also generally play by the rules laid out here, so many don't see the point of learning frameworks or models that haven't shown empirical success.

Python!


I have put together the 0.1-beta version of IEtools (Information Equilibrium tools) for python (along with a demo jupyter notebook looking at the unemployment rate and NGDP/L). Everything is available in my GitHub repositories. The direct link to the python repository is:

https://github.com/infotranecon/IEtools

While I still love Mathematica (and will likely continue to use it for most of my work here), python is free for everybody.

Tuesday, July 18, 2017

UK Unemployment 1860-1915 (dynamic equilibrium model)

In addition to challenging the dynamic equilibrium model with a longer time series of US data [1], John Handley also challenged it with a long time series of UK data (available here from FRED). I focused on the pre-WWI data because I already looked at the post-war data here and the interwar period is strongly affected by the "conscription" shocks seen in [1]. Anyway, the results are decent:


The centers of the shocks are at 1861.5, 1866.9, 1876.3, 1884.6, 1892.5, 1902.1, and 1908.3. I think I might split the 1902.1 shock into two shocks.

These recessions roughly correspond with the Panic of 1857, the post-Civil War UK recession, and the "Long Depression" (1876, 1884, 1892) beginning with the Panic of 1873. The 1902 and 1908 shocks do not correspond to recessions listed at Wikipedia (which of course is definitive and exhaustive).

Dynamic equilibrium model: CPI (all items)

With the new CPI data out last week, I updated the dynamic equilibrium model for CPI (all items) that I looked at in this post:



The former uses 90% confidence intervals, while the latter graph uses MAD as the measure of model uncertainty since the derivative of CPI (all items) is very volatile.
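
For reference, a minimal sketch of the MAD (median absolute deviation) calculation on model residuals (the numbers here are made up, not the actual CPI residuals):

```python
import numpy as np

def mad(residuals):
    """Median absolute deviation: a spread measure that is robust to the
    large outliers present in the volatile CPI (all items) derivative."""
    r = np.asarray(residuals, dtype=float)
    return np.median(np.abs(r - np.median(r)))

print(mad([0.1, -0.2, 0.05, 3.0, -0.1]))  # the 3.0 outlier barely moves it
```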

Sunday, July 16, 2017

Presentation: macroeconomics and ensembles

I put together a new presentation that looks at macroeconomics, partially inspired by this blog post. It can be downloaded in pdf format from this link. [Now available as a "Twitter Talk" as well!] The rest of the slides are below the fold.




Friday, July 14, 2017

Unemployment 1929-1968 (dynamic equilibrium model)

John Handley challenged me to take on the 1930s, 40s, and 50s with the dynamic equilibrium model. While the Great Depression is fairly uncertain (the model operates on the log of the unemployment rate, so returning to the linear scale makes the error bands exponentially larger at higher unemployment rates), the overall model works well:


In the interest of making the optimization run in a reasonable time, I split the data for the 30s and the 50s, keeping the 40s in common between them. Here's the fit for the 30s and 40s (illustrating that exponential increase in the width of the error bands):


Per the original thread, it is hard to see any effects of the New Deal (possibly it arrested the increase of the unemployment rate, similar to the possible effect of the various actions in 2008-9). Overall, the best thing to say is we don't know.

The remaining time period from the 40s through the 60s (to overlap with the original model) has smaller confidence intervals:


In both of these graphs, however, we do see a potential effect of the draft during WWII and Korea. There are two fairly large positive shocks centered at 1942.6 and 1950.5.

Another feature of the data from the 50s and 60s (also apparent in the 70s and 80s, but gradually disappears over time) is a reproducible "overshooting" effect (which I highlighted in green, scribbling on the graph):


So maybe Steve Keen's models could be useful ... for second order effects in the dynamic equilibrium framework. Whatever causes it seems to fade out by the 1990s (which coincidentally is around the time the non-equilibrium effect of women entering the workforce fades out).

...

Update 3 August 2019:

Here's more on the overshooting/step-response effect.

Thursday, July 13, 2017

Keynes versus Samuelson


I read through Roger Farmer's book excerpt up at Evonomics (and have subsequently bought the book). I tweeted a bit about it, but I think one point I was trying to make is better made with a blog post. Farmer's juxtaposition of Samuelson's neoclassical synthesis and Hicks's IS-LM model made clear in my mind the way to understand "what went wrong":
The program that Hicks initiated was to understand the connection between Keynesian economics and general equilibrium theory. But, it was not a complete theory of the macroeconomy because the IS-LM model does not explain how the price level is set. The IS-LM model determines the unemployment rate, the interest rate, and the real value of GDP, but it has nothing to say about the general level of prices or the rate of inflation of prices from one week to the next. 
To complete the reconciliation of Keynesian economics with general equilibrium theory, Paul Samuelson introduced the neoclassical synthesis in 1955. According to this theory, if unemployment is too high, the money wage will fall as workers compete with each other for existing jobs. Falling wages will be passed through to falling prices as firms compete with each other to sell the goods they produce. In this view of the world, high unemployment is a temporary phenomenon caused by the slow adjustment of money wages and money prices. In Samuelson’s vision, the economy is Keynesian in the short run, when some wages and prices are sticky. It is classical in the long run when all wages and prices have had time to adjust.
There are two ways to think about what it means for the IS-LM model to fail to explain the price level:

  1. It is a model of the short run -- so short that prices do not change appreciably from inflation during the observation time. Symbolically, t << 1/π such that P ~ exp(π t) ≈ 1 + π t ≈ 1. This is Samuelson's version: prices are "sticky" in the short run, but adjust in the long run.
  2. It is a model of a low inflation economy. Inflation is so low that the price level can be considered approximately constant (and therefore real and nominal quantities just differ by a scale factor). Symbolically, d log P/dt = π ≈ 0 << γ where γ is some other growth scale such as interest rates, NGDP growth, or population growth (I'd go with the first [1]).

These two scenarios overlap in some short run (because log P ~ π t); the difference is that inflation can stay low much longer than the price level can stay near its initial value. This distinguishes Keynes' view of e.g. persistent slumps from Samuelson's view of eventual adjustment. I've made the case that the IS-LM model should be understood as an effective theory derived from the second limit, not the first.

As an aside, it's not that Samuelson's limit doesn't make empirical sense in today's economy. Inflation is on the order of π ~ 2%, which implies a time horizon (1/π) of 50 years (and therefore IS-LM should apply for 5 or so years to an accuracy of about 10%). That's a pretty long short run.
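
Plugging in the numbers:

$$
P \approx e^{\pi t} \approx 1 + \pi t, \qquad \pi \approx 0.02 \ \text{yr}^{-1}, \; t = 5 \ \text{yr} \quad \Rightarrow \quad \pi t = 0.1
$$

so treating the price level as constant over a 5-year window introduces an error of roughly 10%.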

Another point I'd like to make is that the second limit is more definitively a "macro" limit in the sense that it is about macro observables (low inflation) rather than micro observables (sticky prices). In fact, we can consider the second limit as "macro stickiness": individual prices fluctuate (i.e. aren't sticky), but the aggregate distribution is relatively constant (statistical equilibrium). We can further connect the second limit (and the potential for a persistent slump) to properties of the effective theory of the ensemble average of that statistical equilibrium (namely that 〈k〉 - 1 ≈ 0). This is all to say that the second limit is a true emergent "macro" theory that can be understood without much knowledge of the underlying agents (or, put another way, in ignorance of how agents behave).

...

Update 19 December 2017

I eventually turned this post into an example for a presentation on macroeconomics and ensembles of markets.

...

Footnotes:

[1] This also opens up discussion of the "liquidity trap". If interest rates r are the proper scale to compare to inflation, we have an additional regime we need to understand where both inflation and interest rates are near zero (π, r ≈ 0).

Wednesday, July 12, 2017

JOLTS leading indicators?

In what is becoming a mini-series on the utility of the forecasts made using the information equilibrium/dynamic equilibrium framework (previous installment here), I wanted to assess whether the Job Openings and Labor Turnover survey (JOLTS) data could be used as leading indicators of a potential recession (I looked at the various measures previously here).

The latest data for job openings seems to be showing the beginnings of a correlated negative deviation that could be the start of a downturn:


The other indicators were more mixed, so I asked myself: does one measure show a recession earlier than the others? A note before we start -- this analysis is based on a sample of one recession (JOLTS only goes back to the early 2000s), so we should take it with a bit more than a grain of salt.

I looked at the estimate of the center of the 2008 recession transition for hires rate (HIR), job openings rate (JOR), quits rate (QUR), and the unemployment rate (UNR):


The errors shown are 2 standard deviations, and the months on the x-axis are the months that the data points are for (the data usually becomes available after another month or so, e.g. May 2017 data was released 11 July 2017). We can see that hires leads the pack -- i.e. the center of the hires transition precedes the other measures.

Note this is not the same thing as figuring out when a transition becomes detectable. I looked at this using unemployment rate data back in April. Two factors enter into the detectability: the width of the transition and the relative measurement noise level. While most of the data has comparable widths:


(the error bars show the parameter that determines the width of the transition), the hires data has more relative noise than the unemployment rate (think signal-to-noise ratio). This could potentially make the hires data less useful as an early detector of a recession despite being the leading observable.

With those caveats out of the way, it is possible the hires data might show the beginnings of a recession several months in advance of the unemployment rate. Like the job openings, it is also showing a negative deviation:


However, the most recent data lends support to the null hypothesis of no deviation. Regardless, I will continue to monitor this.

Monday, July 10, 2017

Does information equilibrium add information?

I've been asked several times about the utility of the forecasts I make with the information equilibrium (IE) model. I came up with a good way to quantify it for the US 10-year interest rate forecast I made nearly 2 years ago. Here I've compared the IE model (free) to both the Blue Chip Economic Indicators (BCEI, for which the editors charge almost 1500 dollars per year) and a simple ARIMA process:


As you can see, the IE model represents a considerable restriction of the uncertainty relative to the ARIMA model (which is to say it adds bits of information/erases bits of randomness ‒ which explains the bad pun in the title of this post).
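
One rough way to put a number on "adds bits of information" (my own sketch, assuming approximately Gaussian forecast errors and using made-up band widths rather than the actual model output): the reduction in differential entropy going from a wide forecast distribution to a narrow one is $\tfrac{1}{2} \log_{2} (\sigma_{1}^{2}/\sigma_{2}^{2})$ bits.

```python
import numpy as np

def bits_gained(sigma_wide, sigma_narrow):
    """Entropy reduction (bits) between two Gaussian forecasts with the given std devs."""
    return 0.5 * np.log2(sigma_wide**2 / sigma_narrow**2)

# Hypothetical forecast band widths (percentage points on the 10-year rate)
print(bits_gained(sigma_wide=1.0, sigma_narrow=0.5))  # 1.0 bit
```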

Also, I ran the model for the UK (belatedly fulfilling a request from a commenter). I had NGDP and monetary base data up until the start of 2017 so the prediction starts from there. I show both the longer view and the more recent view:



Sunday, July 9, 2017

The labor demand curve

John Cochrane may have written the most concise argument that economic theory should be rejected:
Economic theory also forces logical consistency that would not otherwise be obvious. You can't argue that the labor demand curve is vertical today, for the minimum wage, and horizontal tomorrow, for immigrants. There is one labor demand curve, and it is what it is.
The conclusion should be: Therefore, we must reject the concept of a fixed labor demand curve.

Overall, Cochrane seems to get the concept of science wrong. A few days ago, I noted that Cochrane does not seem to understand what economic theory is for. Now he seems to misunderstand what empirical data is for.

See, the issue is that no one is "arguing" that the labor demand curve is vertical for the minimum wage. No novel theory is being constructed to give us a vertical labor demand curve. The studies are empirical studies. The empirical studies show that if there is such a thing as a labor demand curve, it must be vertical for the minimum wage. But really, all the studies show is that raising the minimum wage does not appear to have a disemployment effect. The "Econ 101" approach pictures this as a vertical labor demand curve. And that would be fine on its own.

Likewise, no one is "arguing" that the labor demand curve is horizontal for an influx of workers. Again, no novel theory is being constructed to give us a horizontal labor demand curve. The empirical studies are only showing that an influx of workers does not lower wages. The "Econ 101" approach pictures this as a horizontal labor demand curve. And that would be fine on its own.

But those two results do not exist in a vacuum. If you try to understand both empirical results with a fixed labor demand curve, then your only choice is to reject the fixed curve. You have two experiments. One shows the labor demand curve is horizontal, the other vertical. Something has to give.

*  *  *

Now there is a way to make sense of these results using information equilibrium. Here are several posts on the minimum wage (here, here, and here). The different effects of immigration versus other supply shocks are described in this post. If you are curious, click the links. But the main point is that when we make "Econ 101" arguments, we are making lots of assumptions and therefore restricting the scope of our theory. 

In order to obtain the Econ 101 result for the minimum wage and immigration, you essentially have to make the same specific assumptions (assume the same scope): 1) demand changes slowly with time and 2) supply increases rapidly compared to demand changes. The commensurate scope is the reason why the Econ 101 diagrams are logically consistent. But they're both inconsistent with the empirical data. Therefore we should question the scope. Under different scope conditions (i.e. demand and supply both change), information equilibrium tells us increasing the minimum wage or increasing immigration increases output ‒ meaning that you should probably accept that demand is changing in both cases. Which is the point of higher minimum wage and pro-immigration arguments: they create economic growth.

As an aside, I think a lot of right-leaning economics might stem from assuming demand changes slowly. The cases I just mentioned remind me of one where Deirdre McCloskey seems to assume demand changes slowly in order to argue against Thomas Piketty.

In the same sense, while you can obtain an isothermal expansion curve [1] from a restricted scope of an ideal gas (that scope being constant temperature, hence isothermal), if your data is inconsistent with the theory you should begin to question the scope (was it really isothermal?). Unfortunately, Econ 101 ‒ and for that matter much of economic theory ‒ does not examine its scope. As Cochrane says: it's about "logic". Logic has no scope. Things are either logical or illogical. That's not how science works. Some descriptions are approximate under particular assumptions (constant temperature, speeds slower than the speed of light) and fail when those assumptions aren't met.

Given empirical data requiring contradictory interpretations of theory (different labor demand curves), a scientific approach would immediately question the scope of the theory being applied. What assumptions did I make to come up with a fixed demand curve? I definitely shouldn't assume studies that contradict the theory are wrong.

...

Footnotes:

[1] Actually, in information equilibrium the Econ 101 demand curve is essentially an isodemand curve (i.e. a curve where demand is held constant/changes slowly) analogous to an isothermal process using the thermodynamic analogies. If I say the minimum wage won't decrease employment because it increases overall demand, the Econ 101 rebuttal is to come back and say "assuming demand doesn't change ...". It'd be kind of funny if it wasn't so perversely in the defense of the powerful.

Saturday, July 8, 2017

A few forecast updates

It's a nice day here in Seattle, but I think I've gotten enough sun for one day so I'm updating some forecasts. Unemployment data came out this week so there's this forecast (I extended the hypothetical recession shock [gray] a bit):


There's another forecast I haven't updated in a while: the monetary base. There's been an additional Fed interest rate increase to show (it's now the level labeled C'''):


One thing to note with this forecast is that I'm trying to understand the process of reaching information equilibrium. The model doesn't actually tell us how a new equilibrium is reached; it just tells us what it is. What path do the variables follow? The picture above is just a stochastic linear path, but it's possible it can end up being more complex. We'll figure that out if the random walk with drift fails. And we'll figure out if the model fails if we never head back to C''' (or whatever the interest rate at the time tells us it should be).
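
For reference, the "stochastic linear path" above is just a random walk with drift; here is a minimal sketch (with made-up drift and noise values rather than the fitted ones):

```python
import numpy as np

rng = np.random.default_rng(42)

def random_walk_with_drift(x0, drift, sigma, n_steps):
    """x_t = x_{t-1} + drift + eps_t with eps_t ~ N(0, sigma^2)."""
    steps = drift + sigma * rng.normal(size=n_steps)
    return x0 + np.concatenate(([0.0], np.cumsum(steps)))

# Made-up weekly path for log(monetary base) drifting toward a lower level (C''')
path = random_walk_with_drift(x0=0.0, drift=-0.002, sigma=0.004, n_steps=52)
print(path[-1])  # ends near drift * n_steps = -0.104 on average
```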

Finally, there's this forecast of Japan's CPI using the dynamic equilibrium model:


As always, all of the forecasts I track are available at the aggregated prediction link.

...

Update 10 July 2017: Here is the 10-year interest rate forecast (originally made nearly 2 years ago here):


Update 11 July 2017: This forecast of the S&P 500 from January is basically on track: