Tuesday, October 30, 2018

Comparing my S&P 500 forecast to observations

Disclaimer/disclosure: It is entirely possible I am a crackpot physicist who has deluded himself (always a 'him', amirite?) into believing he has figured out some structure in the stock market. You most likely do not want to wager the change in your pocket on that, much less your life savings. The model presented is for purely academic curiosity purposes, not stock advice. I have my 401(k) invested in an S&P 500 index fund and own a few shares of Boeing stock.
I've been testing a dynamic information equilibrium model (DIEM) forecast of the S&P 500 (a.k.a. hubris) since January 2017, and we're now about 2 months from the original end date of December 31, 2018 [1]. It has worked remarkably well (click to enlarge):


The model is the red line, the red band shows the single-prediction errors (90% confidence) over the entire model data set (1950-2017), and the blue band is the 90% confidence ARMA(2,1) forecast from the last data point (December 2017). The black line is the post-forecast data.

The current data is well within the "normal" range of fluctuations. I've added a red dashed line to indicate the "recession warning" level. Should the data exceed this threshold, the model would potentially indicate the presence of a shock (usually associated with recessions) [2]. Actually, the S&P 500 seems to have a bit of multi-scale self-similarity (a property of "fractals"), so as you zoom in, shocks of ever smaller magnitude and ever shorter duration can be resolved. Whether or not there's a shock associated with a recession depends a bit on the scale of the recession. All that is to say the S&P 500 isn't exactly a good recession indicator on its own (labor market measures are better and tend to be leading indicators).

I've heard questions lately in the business news about whether we are experiencing a market "correction" — a fall on the order of 10%. This probably reflects a genuinely useful heuristic, but the metric is too vague (over one week? two weeks? a month? from what level?) to be of much value as stated. The DIEM view makes it more explicit: a 10% deviation from the trend (the red model line) is roughly at the bottom of the blue band [3]. That's close enough to the "recession warning" line to fold the heuristic into a more general concept separating normal fluctuations from shocks [4]. However, the data has crossed the "correction" line several times since 2010 (but not the "recession warning" line).

...

Footnotes:

[1] I've extended the forecast another year (assuming no shock, click to enlarge):


[2] Here's the longer term overview with recessions (blue) included (click to enlarge):


[3] More explicitly — actually a 0.1 log-deviation which is approximately 10% (click to enlarge):


The width of the blue band is based on the short run (i.e. no recession) volatility (2010-2017) rather than the long run volatility (1950-2017) which is represented by the red band.
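For reference, the conversion between the log-deviation and the percentage fall it corresponds to:

$$
e^{-0.1} \approx 0.905
$$

i.e. a 0.1 log-deviation below trend is a drop of roughly 9.5% — close enough to call it 10%.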

[4] This on volatility regimes is also relevant.

Thursday, October 25, 2018

Keen

Because it comes up from time to time, let me collect my various criticisms of Steve Keen's work in one place with short summaries. I've been critical of his claims and approaches on this blog for years. People keep linking me to his nonsense, so I thought I'd create a reference post for why I think he's talking nonsense.

TL;DR Keen seems to thrive in a niche in heterodox econ where his fans aren't technically savvy enough to realize what he says doesn't make any sense, but he includes enough political polemic, name dropping, and chumminess with other heterodox "schools" that he can construe any attack on his claims as politically motivated or expect those "schools" to rise to his defense. Failing that, he says that people don't understand what he's saying. As I'm practically a Marxist (and even pro-Minsky) and am well-trained in mathematics and model-building, I'm in a pretty good place to push back against this nonsense. I was also what is called a phenomenologist in theoretical physics — I connected theory to data — which gives me particular expertise in understanding the connected roles of theory and empirical data. This last element is a major failure in both mainstream and heterodox economics. Keen was trained as an economist, so that's probably why he's so bad at determining whether his models represent any kind of empirical reality — or even a "realistic" starting point.

...

May 2018

Dirk Bezemer wrote a paper in which he claims that heterodox economics predicted the global financial crisis. Almost all of his references are quotes either taken out of context or completely fabricated. I stand by my assessment, but have published Bezemer's response so people can judge for themselves. Regardless, Steve Keen's website cites Bezemer's paper. However, the reference for that "prediction" was not a prediction of a global financial crisis, but rather of an Australian housing crisis that has never appeared (Keen in fact lost a bet over it). Bezemer's defense against my charge of fabrication only notes that Australia made a policy change, while still conceding the prediction was about Australia. As far as I know, Keen has never publicly acknowledged that he did not predict the global financial crisis — and he should disavow such claims when his proponents make them.

Update: Oh, jeez. Keen himself [pdf] cites Bezemer while simultaneously claiming he has a "model" that predicted the global financial crisis. This is really bad.

September 2017

Keen doesn't understand the second law of thermodynamics and claims that log-linear regression is "false" if it is called a Cobb-Douglas function. He just sounds stupid. I go on to produce evidence that might support one of his claims (because he apparently forgot what supporting evidence looks like).

August 2017

Keen smugly derides an economist for using "the word 'complex' while clearly not understanding its modern meaning". He then says "it's maths semantics by the way, not economics. Look it up." However, Keen often refers to his own models as "complex dynamical systems" or "complex systems" when they are in fact just called dynamical systems. Look it up. Just because you have a nonlinear set of differential equations doesn't mean you have e.g. a complex adaptive system (Paul Cilliers even says of complex systems that "conventional means (e.g. a system of differential equations) not only become[s] impractical, [but] cease[s] to assist in any understanding of the system").

April 2017

This is just a footnote where I talked about seeing Boom Bust Boom (2015) — I'll just quote it: [The movie] brought on Steve Keen for a second to mention money and debt. However, later on they had someone else say that economics shouldn't be approached like a branch of theoretical physics. If I had to pick an economist who used the most inappropriate physics models, it would be Steve Keen who treats the economy like it's a nonlinear electronic circuit. It's just a very odd juxtaposition.

February 2017

So many of Keen's defenders tell me that his models are just qualitative or toy models, so they won't get the data right. If that's how it is, I wonder how these same people (and Keen himself) can be so down on DSGE models, which are actually much better at getting the qualitative behavior right. Regardless, Keen's models are qualitatively accurate in the same way rocks are qualitatively food. This might be an area where Keen and his defenders aren't technically savvy enough to understand what the qualitative features of model outputs actually are. This also appears to be a general failure in economics (here's me saying similar things about DSGE models, for example).

October 2016

Keen says that Kocherlakota's note represents a defense of his (Keen's) approach, but Kocherlakota's note explicitly says that the case where an empirically worse model might be taken more seriously is when that worse model has better microfoundations (microeconomics). Keen's models have no microfoundations (nor even a basis in microeconometrics), and he has often criticized the idea of microfoundations or held them responsible for DSGE models (e.g. here or citing Solow on DSGE here). Keen doesn't seem to pass basic reading comprehension in this case.

October 2016

This post was an elaboration on a post by Roger Farmer about how we can't really tell the difference empirically between nonlinear models and linear models with stochastic shocks. As Keen's models don't look anything like the empirical data (even qualitatively, see above), this is kind of moot. So really, Keen's insistence on nonlinear dynamical systems is based on nothing — you can't tell the difference between it and other approaches, and it doesn't have the benefit of being a good model of the empirical data in the first place.

February 2016

This one really pissed me off. Keen references mathematics that most people don't really understand, in a way that isn't really appropriate to the economic system, to claim that mainstream economists are full of it and that capitalism is mathematically proven (ha!) to be unstable. It pissed me off because either Keen understands the math and is basically deceiving people who don't know any better, or he doesn't understand it and is engaging in a bit of, um, rectally disseminated speech. Keen's claim is like saying all buildings are unstable and will collapse soon by asserting a definition of "building" that can only be a literal house of cards. Either he knows what he's doing and is a snake oil salesman, or he doesn't know and should really just go back to school.

December 2015

I started off my earliest blog post referencing Keen by saying "I'm not sure I understand the allure of Steve Keen." Keen's models are equivalent to nonlinear circuits in electronics, and as a person who has built a couple of them in my lifetime, I am aware of just how easy it is for noise or small deviations in e.g. resistor specs to completely ruin the chaotic limit cycles (the things that Keen likens to the business cycle). These circuits oscillate in only narrow ranges of component values (i.e. model parameters). And that fine-tuning problem comes before you even get to issues of the Lucas critique (why should parameters stay in the finely-tuned areas of phase space where the system exhibits a chaotic or nonlinear result for several decades?). Instead of chaotic limit cycles being the result of a rickety, jury-rigged, non-deterministic underlying system (the image most people probably have in their mind when they think of chaos), they're actually the result of a finely-tuned deterministic system. It's more like the Newtonian clockwork universe (which, remember, includes the chaotic three-body problem) than the stochastic uncertainty of real economic systems.

...

I also wanted to note another author — J.W. Mason, a professor at CUNY, a fellow at the lefty Roosevelt Institute, and a writer for Jacobin — who I think captured the essence of the problem so succinctly that I've referenced it multiple times. I'll just quote from it because I don't think I could possibly do better.

J.W. Mason, April 2012
... if your idea is just that there is some important connection between A and B and C, the equation A = B + C is not a good way of saying it. 
Honestly, it sometimes feels as though Steve Keen read a bunch of Minsky and Schumpeter and realized that the pace of credit creation plays a big part in the evolution of GDP. So he decided to theorize that relationship by writing, credit squiggly GDP. And when you try to find out what exactly is meant by squiggly, what you get are speeches about how orthodox economics ignores the role of the banking system. 
Keen is taken seriously by serious people. He’s presenting this paper at the big INET conference in Berlin next week. It’s not OK that he writes in a way that makes it impossible to understand or evaluate his ideas. For better or worse, we in the world of unconventional economics cannot rely on the usual professional gatekeepers. So we have a special duty to police each other’s work, not of course for ideology, but for meeting basic standards of logic and evidence. There are very important arguments in Schumpeter, Minsky, etc. about the role of the financial system in capitalism, which mainstream economics has downplayed, just as Keen says. And he may well have something important to add to those arguments. But until he writes in a language spoken by people other than himself, there’s no way to know.
Whereas J.W. Mason points out that Keen's prose is opaque, I am pointing out in the list above that the modeling strategies and mathematics (i.e. the equations) are largely unjustified or inappropriate — not math errors per se (maybe; I haven't checked), but math used without a tight connection to the claims about the system. Bad math plus opaque language is not a recipe for progress.

Wednesday, October 24, 2018

New Zealand's 2% inflation target

Paul Volcker has an article on Bloomberg about the 2% inflation target. Now I don't have any particular problem with arguing that central banks should focus on more than a numerical inflation target (the main idea of the rest of the article), but Volcker tells a brief story that feeds the whole "central banks controlling inflation" narrative — a narrative that doesn't appear to be well-supported by the data.

Here's Volcker:
I think I know the origin [of the 2% inflation target]. It’s not a matter of theory or of deep empirical studies. Just a very practical decision in a far-away place. 
New Zealand is a small country, known among other things for excellent trout fishing. So, as I left the Federal Reserve in 1987, I happily accepted an invitation to visit. It turns out I was there, in one respect, under false pretenses. Getting off the plane in Auckland, I learned the fishing season was closed. I could have left my fly rods at home. 
In other respects, the visit was fascinating. New Zealand economic policy was undergoing radical change. Years of high inflation, slow growth, and increasing foreign debt culminated in a sharp swing toward support for free markets and a strong attack on inflation led by the traditionally left-wing Labour Party. 
The changes included narrowing the central bank’s focus to a single goal: bringing the inflation rate down to a predetermined target. The new government set an annual inflation rate of zero to 2 percent as the central bank’s key objective. The simplicity of the target was seen as part of its appeal — no excuses, no hedging about, one policy, one instrument. Within a year or so the inflation rate fell to about 2 percent.
The issue is that — using the dynamic information equilibrium model (DIEM) — inflation was already headed in that direction, and both it and the price level could have been forecast through 2018 (!) reasonably well back in 1983 (!) using only data available at the time (click to enlarge):


The forecast was made using data before 1983. The dashed red line is the post-1983 model and the green is the post-1983 data. Volcker's visit was in 1987, and the inflation target wasn't adopted until 1989.

The main feature of the data is the large shock centered at 1978.7, much like similar shocks in the UK and the US (which, by the way, didn't adopt inflation targets at the time, and which also saw their inflation rates fall to approximately constant levels by the 1990s). The source of these shocks lasting from the 1960s to the 1990s appears to be demographic (women entering the workforce) in most Anglophone countries [1], so I wouldn't be surprised if it was demographic in New Zealand as well (unfortunately, little good data going back far enough exists).

So Volcker's story is a bit like the fire brigade showing up after almost everyone has left the building and congratulating itself on saving lives. This is similar to the problematic causality around the 1980s recessions — often associated with the Volcker Fed. I'm not saying he's nefariously claiming credit for things — the interpretation is not completely implausible, and in fact most economists (even recent Nobel prize winners) subscribe to it. It's just difficult to square with the data. If you can forecast today's inflation using only data from before 1983, it's difficult (but not impossible) to conclude that events in 1987 or 1989 had much impact.

...

Update 29 October 2018

Nick Rowe in comments here mentions Canada's target:
How to test the effect of money on inflation? One example: in 1992(?) [ed. 1991] the Bank of Canada said it was going to use monetary policy to bring inflation down to 2%, and keep it there. And that is (roughly) what happened. Either the Bank of Canada got very lucky, or else monetary policy worked in (roughly) the way the Bank of Canada thought it worked.
We can actually play the same game as we played above to show that a forecast from 1985 gets the present day price level (CPI) to within about 1.4% (102.7 predicted versus 104.1 actual) over the course of 33 years (click to enlarge):


Nick says that unless monetary policy worked roughly the way the Bank of Canada thought it did, the Bank must have been "very lucky" to get "about 2%" right. But there are two issues: 1) what counts as "about 2%" (the actual dynamic equilibrium appears closer to 1.7%, with 1.6% estimated from pre-1985 data, so "about 2%" can hide up to a 50 basis point error), and 2) the data before the 1970s surge in inflation was also "about 2%". I don't have access to the Bank of Canada's deliberations, but it seems unlikely that the 2% target was chosen without any consideration of the pre-1970 data. In fact, that's exactly the data the dynamic information equilibrium model keys in on to obtain the 1.6% estimate.

Since you can then forecast 2018's CPI using data from before 1985, it is hard to argue that setting the target in 1991 must have had an effect.

...

Footnotes:

[1] Data for several countries, with the US, Canada and UK showing the demographic shift (click to enlarge):


Monday, October 22, 2018

Let's not assume that

I accidentally set off several threads when I tweeted that maybe empirical evidence should guide what we think about the economy rather than pronouncements about what money is. One of the shorter sub-threads (and pretty much the only one where I understood what people were talking about — j/k) included Nick Rowe and his old post on the (lack of) evidence favoring fiscal or monetary policy. He's great because, regardless of what you think about the ideas behind the toy models he builds or the parables he tells, they're remarkably clear illustrations of the ideas. I'd recommend even the most hardened heterodox MMTer read his blog (here's a good one on stock-flow consistency).

The summary of Rowe's post is that if you have a fiscal or monetary authority (government, central bank) that targets some variable it can affect — possibly imperfectly — under the assumption of rational expectations, then there'd be little evidence that the instrument used to target that variable had any effect. The fluctuations in the instrument or target variable are going to be the authority's uncorrelated forecast errors. It's "Milton Friedman's thermostat" (also well explained by Nick Rowe in another post using an analogy with a driver on hilly terrain). The conclusion is that you should expect little evidence that fiscal and/or monetary policy works even if it does.
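To make the thermostat point concrete, here's a toy simulation of my own (not from Rowe's post): the authority observes shocks and sets its instrument to offset them, up to a small control error. The instrument is doing all the work, yet it shows almost no correlation with the target — the only variation left in the target is the control error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

shocks = rng.normal(0.0, 1.0, n)          # shocks the authority wants to offset
control_error = rng.normal(0.0, 0.2, n)   # imperfect control of the instrument

instrument = -shocks + control_error       # policy fully offsets the observed shocks
target = shocks + instrument               # target = shock + policy effect = control error

# Instrument vs. target: weak correlation (~0.2), even though policy is fully effective.
print(np.corrcoef(instrument, target)[0, 1])
# Instrument vs. shocks: ~ -1, i.e. the instrument is "doing all the work" offsetting shocks.
print(np.corrcoef(instrument, shocks)[0, 1])
```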

I’m pretty sure that it was JK Galbraith (with an outside chance that it was Bhagwati) who noted that there is one and only one successful tactic to use, should you happen to get into an argument with Milton Friedman about economics. That is, you listen out for the words “Let us assume” or “Let’s suppose” and immediately jump in and say “No, let’s not assume that”.
If assuming that a central bank with rational expectations stabilizes the economy will produce no evidence that a central bank with rational expectations stabilizes the economy, then what we have is effectively unfalsifiable (in the useful sense of Popper).

Let's not assume that, then.

What use is it to make these assumptions? They essentially prevent learning things about the economy. In fact, the most useful thing to do in this case — even if those assumptions are true — is to assume the opposite: that central banks (or fiscal policy) have no effect on the macroeconomy. Incidentally, this would produce exactly the same observation of a lack of correlation between the authority's inputs and the target variable. In the worst case, at least you'd learn that you were wrong if the assumptions were actually true. And if you discovered robust empirical regularities about e.g. fiscal policy mitigating unemployment, then you'd learn that those assumptions of rational expectations and policy effectiveness are wrong in some way. It's a win-win.

You as the theorist should endeavor to maximize the ways in which you can be wrong through observations because that's how we learn [1]. If your preferred framework makes it impossible for data to shed light on it, then the best evidence you can provide is to assume the opposite and show how it fails to capture the data. These frameworks run the gamut from specific mathematical assumptions to more philosophical ones, but they have a single purpose: protecting beliefs from data. If this isn't your aim, the best course of action still would be to lean over backward against this bias [2] and seek out how you might be proven wrong.

...

Footnotes:

[1] High energy physics (the particles and string theory stuff that people often think of as "physics" in a similar way to the way people think of macroeconomics as "economics") has been thought to be in a kind of existential crisis because it is too good at explaining observations — there's no place high energy physics is wrong, so we can't learn anything new.

[2] It might just be perceived as bias by others, but that's the breaks. If we think you're biased toward adopting a framework that lets you keep your beliefs by escaping comparison with the data, then it's unfortunately on you to disabuse us of that belief.

Thursday, October 18, 2018

Limits to wage growth

It started off with a simple observation prompted by a Twitter thread: since wage growth tends to increase between recessions (i.e. wages accelerate) in the dynamic information equilibrium model (DIEM) while NGDP growth appears to be roughly constant in the absence of an asset bubble or major demographic shift (and especially in the post-Great Recession period), at some point wage growth would exceed NGDP growth. What happens then?

There are a couple of things that could happen:
  1. Additional consumption by people with higher wages can spur nominal growth (due to wage-led real growth or wage-price spiral)
  2. Investment declines as wages eat into profits (e.g. the Marxist view), prompting a recession
There are other theoretical treatments of this scenario, and all of them seem plausible. My question was more about what the data says. I set about combining the wage growth DIEM (green) and the NGDP DIEM (blue) [1] onto a single graph. The result shows that since the 1990s, when wage growth hit NGDP growth, we got a recession. There's even a hint that the same thing happened in the 1980s based on other data (FRED, larger green dots). Click to enlarge:


The wage growth data from the Atlanta Fed is in green (small green dots), while the NGDP growth data from the BEA is in blue (blue dots). The asset bubbles and crashes (dot-com, housing) are shown as dotted blue lines, but the main trend of NGDP during the fading demographic growth surge is shown as the thick blue line. The former don't show up very strongly in the labor force, while the latter does — that's why I think the trend is more relevant.

It is possible that rising wages in the 1990s led to the increased NGDP growth (wage-led growth). However, it is also possible that the asset bubble (dot-com) allowed wages to rise a bit more above the NGDP trend than they would have otherwise. What is interesting is that the "housing bust" happens a bit earlier than the 2008 recession — which doesn't actually happen until wage growth reaches NGDP growth.

If we project wage growth and NGDP growth using the models, we find that they cross over in the 2019-2020 time frame. Actually, the exact cross-over is 2019.8 (October 2019), which not only eerily puts it in October (when a lot of US market crashes have happened) but is also close to the 2019.7 value estimated for yield curve inversion based on extrapolating the path of interest rates. I put in a counterfactual recession in wage growth to show what it might look like.
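To illustrate how a crossover date like that falls out of the two trends, here's a minimal sketch with made-up numbers (these are not the fitted DIEM parameters): wage growth climbing at a constant logarithmic rate between recessions versus a roughly flat NGDP growth trend.

```python
import numpy as np

# Illustrative values only -- not the fitted DIEM parameters.
t0 = 2018.0              # reference year
wage_growth_t0 = 0.033   # wage growth at t0 (hypothetical)
log_slope = 0.10         # wage growth rises ~10% per year in log terms (hypothetical)
ngdp_growth = 0.040      # roughly constant trend NGDP growth (hypothetical)

# Solve wage_growth_t0 * exp(log_slope * (t - t0)) = ngdp_growth for t:
t_cross = t0 + np.log(ngdp_growth / wage_growth_t0) / log_slope
print(round(t_cross, 1))  # ~2019.9 with these made-up numbers
```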

In any case, this provides a test: will NGDP growth increase (wage-led growth), or will we get a recession due to limits to wage growth? Or will neither of these happen — and the models turn out to be wrong?

One other thing to note: this would be almost completely unobservable without the dynamic information equilibrium model and the low noise wage growth data from the Atlanta Fed. NGDP growth is extremely noisy, and other measures of wage growth are much more uncertain (ECI, or the aforementioned national income). However, extracting the trends of the data using the DIEM allows this pattern to emerge.

...

Update 31 October 2018

I got a great question on Twitter from Richard Clayton:
Would love to read your take on the "expansions don't die of old age" argument, as presented here ... Seems to me your Limits to Wage Growth and constant-rate-of-decline in [unemployment] make for a counter[argument] 
(Some slight editing and adding links because it was from Twitter.)

The original paper from Diebold is from 1992 [pdf]. It finds that you can't reject the hypothesis that expansions don't die of old age (if I've read it correctly). The post above finds an interesting coincidence that recessions typically happen around the time that wage growth rises to be comparable to NGDP growth. I speculated that this might be a causal factor — e.g. wages start to eat into profits, causing companies to cut back on investment. But that also seems to raise the question of how that can be consistent with no particular evidence for expansions "dying of old age". That is to say, if wage growth steadily rises toward NGDP growth, then the risk of a recession should increase with time.

If you look at the methodology of the 1992 paper, you can see that it considers expansion duration only. In the speculation in the post above, the risk of recession would rise with duration, but at any given duration it would be lower the larger the previous recession was. If we assume a simple GDP growth model (constant growth), then you can see in the diagram that recessions of different magnitudes lengthen or shorten the subsequent expansion:


The further wages are driven down after a recession, the longer the subsequent expansion, ceteris paribus. If you add in the slowly decreasing rate of NGDP growth, bubbles like the dot-com bubble (which may have extended the duration of the 90s expansion), as well as positive shocks to wage growth (like the one in 2015), you get a much more complicated picture — one that would make it impossible to derive the hazard function from expansion duration alone.
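Under the same constant-log-slope picture (my notation here, not from the Diebold paper), the implied expansion duration is just the time it takes wage growth to climb back from its post-recession level $w_0$ to trend NGDP growth $g$ at log-slope $\gamma$:

$$
T_{\text{expansion}} \approx \frac{1}{\gamma} \log \frac{g}{w_{0}}
$$

so a deeper recession (a smaller $w_0$) mechanically lengthens the subsequent expansion, all else equal.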

This does not mean the limits to wage growth hypothesis is correct — to test that hypothesis, we'll have to see the path of wage growth and NGDP growth through the next recession. The hypothesis predicts a recession in the next couple of years (roughly 2020). There does not appear to be a bubble affecting NGDP growth, so the possible factor affecting the timing of the 2001 recession should not apply. If there is no recession and wage growth rises above NGDP growth, then we can probably reject the hypothesis (on the basis of usefulness in understanding the economy, not statistical rejection, which is a higher bar to clear).

...

Footnotes:

[1] Here are the wage growth and NGDP DIEMs compared to data:



The CBO forecasts unemployment (and so do I)


The Congressional Budget Office (CBO) forecast the unemployment rate over the next ten years back in April 2018. Their model (blue dashed line segments) is a pretty standard "natural rate"-like (equilibrium rate) model where unemployment has some non-zero equilibrium level (here, about 4.8%) that it would eventually reach (shown in the graph above). However, neither the path they propose nor the equilibrium level it sustains has ever been observed in US unemployment data over any 10-year period.

Of course, the path from the dynamic information equilibrium model (DIEM, gray bands) over the same period (conditional on no recession) has also never been observed — at least for that length of time. It has been observed over e.g. the past 10 years, but the additional 10 years would make it a twenty year continuous decline in unemployment. This seems unlikely, but then Australia has done it with only a few blips — all much smaller than US recessions (unemployment rose about 1.5 percentage points during the Global Financial Crisis).

However, that unbroken decline would also make it a 20-year period without a recession, only seen in a couple countries (like the aforementioned Australia). Therefore I added a few possible counterfactual recession scenarios (gray dashed lines) to compare to the CBO forecast. Two of the scenarios have a height (severity) and width (measuring the steepness of the unemployment increase) taken from the average of the post-war recessions (7.9%, and rising over ~ 4 months, respectively). These two have different onsets: the first turns around during 2019 just like the CBO forecast, and the second takes off when the CBO forecast rises above the DIEM forecast. A third counterfactual matches the rise of the unemployment rate in the CBO forecast along with the height. This third counterfactual is effectively the recession that the CBO is forecasting from the standpoint of the DIEM.

One benefit of the DIEM is that it forecasts paths of unemployment that have observational precedent. A drawback to this is that if unemployment begins to exhibit behavior that has never been seen (like remaining constant for almost 8 years), it is unlikely the DIEM will be able to follow along. This makes the DIEM falsifiable, unlike the equilibrium rate models. Equilibrium rate models only have to claim that a recession intervened, and that unemployment will reach its equilibrium in another 10 years. But then, what's the use of an equilibrium rate that is never observed?

...

PS I'm not impugning the work of the CBO, which is tasked with forecasting based on the traditional understanding of the macroeconomy. However, most economists seem to forecast only a few quarters into the future (understandably), and the oddity of the traditional understanding (especially its lack of precedent in empirical data) only comes out over longer horizons. The Fed typically puts this in a vague "longer run" column in its projection materials [pdf]. The CBO forecast is one of the few to show what this looks like explicitly — in a sense, it's more honest.

Wednesday, October 17, 2018

Labor force participation and unemployment

I think sufficient evidence has accumulated to say that there was likely a positive shock to the prime-age labor force participation rate (LFPR) in 2016, associated with the shock that lowered the unemployment rate (U) in 2014. I posited the existence of this shock based on the observed relationship between LFPR and the unemployment rate, despite the limited evidence in the LFPR data itself. Much like how the similar shock structure of JOLTS hires, wage growth, and the unemployment rate implies a relationship that can be used for forecasting, the downward shift in unemployment in 2014 forecast an upward shift in labor force participation some time later.

As we can see by comparing the models with and without the shock, it's definitely an improvement [1] (click to enlarge):


The relationship between LFPR and U implies a Beveridge-like curve — however, it would be one that is completely obscured by the long duration of shocks to LFPR (it reacts slowly, while U reacts quickly and with greater magnitude). The recent data remains consistent with the predicted relationship:


...

Footnotes:

[1] Of course, the data is still consistent with a somewhat higher value for the dynamic information equilibrium slope:


The next few months should allow us to distinguish between these models (as the data will begin to fall below the forecast relatively soon if the actual slope is lower).

Tuesday, October 16, 2018

Are consumption, income, and GDP different measures?

I read this great blog post by Beatrice Cherrier on macro modeling, and I plan on having more to say about it in the future. However, there was an example of discourse on modeling consumption and income that made me wonder: What is the relationship between consumption and income? Does income drive consumption? I used the idea here — that dynamic information equilibrium models (DIEMs) with comparable shock structure are related — to take a look at Personal Income, Personal Consumption Expenditures, and Nominal GDP (FRED series PI, PCE, and GDP, respectively). But the best I can conclude is that these data series represent the same information, and it is likely the differences are entirely measurement errors (questions of e.g. what is treated as income versus what agents think of as income). It's either that, or there's no fixed relationship — sometimes increased income drives consumption, sometimes increased consumption drives income.

Here are the DIEMs for the three data series — they consist of the demographic shock (increasing labor force participation by women) of the 60s and 70s and the boom-bust-boom-bust cycle of the dot-com and housing bubbles. There is a residual "business cycle" element on top of the demographic shift that I will discuss later. PCE is red, PI is purple, and GDP is turquoise (click to enlarge).


As far as can be gleaned from the data, the demographic shock as well as the 2001 and 2008 recessions (the "asset bubble era") are effectively simultaneous across the three measures. The dot-com asset bubble has income precede consumption, and the housing asset bubble has consumption precede income (both orderings look statistically significant based on the error estimates of the shock centers). If we look at the residual "business cycle" (the "Phillips curve era") after extracting the demographic shock, the measures are all over the place in terms of causality (aside from simultaneously falling during recessions):


The bottom line is that it seems more likely that the various discrepancies can be accounted for by measurement differences than by, say, a nonlinear and complex relationship between consumption and income that fails to be measurable at this level of fidelity. True, it's Occam's razor, but the idea that, to a good approximation, consumption is 68% of NGDP [1] and 78% of income seems both useful and reasonable — especially given that the alternative is an armchair behavioral relationship that couldn't be rejected by data for at least another 100 years.

...

Footnotes:

[1] Actually, consumption is about 60% of NGDP before the demographic shift and rises to 68% after. A similar story is told using wages.

JOLTS day (October 2018)

The Job Openings and Labor Turnover Survey (JOLTS) data for August 2018 was released today (available on FRED), and there aren't a lot of surprises from the viewpoint of the dynamic information equilibrium model (DIEM, described in detail in my working paper). Even the uptick in JOLTS openings doesn't entirely change the fact that most of the data since 2016 is part of a correlated deviation that could represent the beginnings of a recession at the end of 2019 or beginning of 2020. We'd really need to be seeing an openings rate of 4.9% or higher to discount that possibility. Recession counterfactuals are shown as gray bands. As always, click to enlarge.



I'll also be monitoring the "alternate" model of hires (with a lower dynamic equilibrium rate and additional positive shock in 2014) based on a longer time series (discussed here).


Regardless of which model you use, the hires data continued the status quo, implying (based on this model of combined DIEMs) that we should continue to see the unemployment rate fall through January 2019 (5 months from August 2018) and wage growth continue to rise through July 2019 (11 months from August 2018).


Building "models"

At my talk for the workshop he organized, Fabio Ghironi asked me about the dynamic information equilibrium models (DIEMs) as models in the economics sense (causal relationships between variables) rather than the physics sense (mathematical descriptions of data). Much of the work I have been doing is in the latter sense, but I've also put together a few models in the former sense (e.g. a monetary one and an information equilibrium version of the 3-equation New Keynesian DSGE model).

I've been steadily working toward building some models based on the dynamic information equilibrium descriptions of data — I've been collecting useful descriptions of data in "macroeconomic seismograms" as a first step. With the longer hires series and similar shock structure to wage growth, I can show in principle how this kind of model building would progress.

First, one identifies multiple DIEMs with similar shock structure — the combination that comes to mind most readily is wage growth, JOLTS hires, and unemployment:


We can perform a log-linear transformation (scaling) along with a temporal translation on each series to map them to each other. I chose to map wage growth and unemployment to the hires DIEM:


This tells us that e.g. the log-amplitudes of the shocks to hires are about 0.3 times the size of the log-amplitudes of the shocks to wages and unemployment, but more importantly that hires lead wages by 0.9 years and unemployment by 0.4 years. Basically:

UNRATE(t) = f(HIRES(t − 0.4))
WAGE(t) = g(HIRES(t − 0.9))

where f(.) and g(.) are log-linear transformations of the HIRES data. We could add e.g. Okun's law (see here) and labor-driven inflation (here) and get a description of RGDP, inflation, wage growth, and unemployment rate based on a single input (JOLTS hires). This model is effectively a "quantity theory of labor" model where the economy is driven by hiring.
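As a sketch of what f(.) and g(.) would look like in practice — a log-linear scaling plus a time shift — here's a minimal version with placeholder scale, offset, and lag values (my own illustrative numbers, not the fitted transformations):

```python
import numpy as np

def log_linear_lag(log_x, scale, offset, lag_steps):
    """Map one log-series onto another: scale * log_x(t - lag) + offset (a sketch)."""
    lagged = np.concatenate([np.full(lag_steps, np.nan), log_x[:-lag_steps]])
    return scale * lagged + offset

# Synthetic monthly "hires" series just to exercise the function; with monthly data,
# a 0.4-year lead is about 5 steps and a 0.9-year lead about 11 steps.
log_hires = np.log(3.5 + 0.01 * np.arange(120))
log_unrate_est = log_linear_lag(log_hires, scale=-3.0, offset=5.0, lag_steps=5)   # placeholder values
log_wage_est = log_linear_lag(log_hires, scale=3.0, offset=-2.0, lag_steps=11)    # placeholder values
print(log_unrate_est[-1], log_wage_est[-1])
```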

One thing this model implies is that, given the hires data that came in last month (data for July), we should expect the unemployment rate to fall for at least another 5 months (from July, so until December) and wage growth to increase for another 11 months (from July, so until June 2019). What's interesting is that this suggests we should start seeing some kind of decline in the hires rate late this year or early next year if the yield curve inversion estimate is accurate. Of course, all of these estimates have errors on the order of 1-2 months.

Monday, October 15, 2018

Wage growth data from the Atlanta Fed

The Atlanta Fed released the latest data in its wage growth tracker, and it's consistent with the dynamic information equilibrium model:




Interest rates and model scope

Along with the market slump last week, long-term interest rates fell a bit, resulting in a smaller spread. However, the data didn't fall even remotely enough to bring it back in line with the monetary information equilibrium interest rate model:


This model, r10y = f(NGDP, M0), essentially says long-term (10-year) interest rates are related to nominal output (NGDP) and the monetary base minus reserves (M0), and it's failing fairly badly as the Fed has increased short-term rates (as I've mentioned earlier). In fact, most of the monetary models constructed with the information equilibrium framework have not performed very well.

There's a great story here about a naive scientist — trusting the zeitgeist and the public face of academic economics — building models where output, money, and interest rates were strongly connected, but that failed when compared to data.

However, there might be knowledge to glean from how this model is failing (which may be a failure of scope, not of the underlying principles). Don't read this as a defense of a model that isn't working (trust me, I actually relish the idea of more evidence that money is irrelevant to macroeconomics), but rather as a post-mortem on a model that basically has the scope conditions of a DSGE model, as eloquently described by Keynes:
In the long run we are all dead. Economists set themselves too easy, too useless a task, if in tempestuous seasons they can only tell us, that when the storm is long past, the ocean is flat again.
That last bit about the flat ocean is the scope condition: an economy nowhere near a recession. Let me explain ...

I was looking at the model in the first graph above and noticed something in the data. The long rate seems to respond to the short rate — it is almost repelled by the short rate as the latter approaches. Where that happens, the model error increases. Here's the long rate model (gray) with the long rate data (blue) and the short rate data (yellow dashed):



The strongest episodes are the 1970s, the 1980s, and the 2000s recession. Sure enough, if you plot the model error versus the interest rate spread, the error increases as the 10-year rate and the 3-month rate approach each other:


As a declining spread (and eventual yield curve inversion) is indicative of a recession, this makes a pretty good case for limiting the scope of the model r10y = f(NGDP, M0) to cases where the 10-year rate is higher than the 3-month rate (r3m). When it is out of scope, r10y ~ r3m ~ EFFR is a much better model [1]. That is to say: long-term interest rates are the free-market price of "money" unless the Fed is rapidly raising short rates (in which case the long rate is effectively a fixed price set by the Fed). This view makes sense intuitively, but it also turns forecasting long-term interest rates into an occasional game of "guess what the Fed is going to do" with short-term rates.
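Schematically, the scope condition amounts to a piecewise model. The log-log form I use for f below is my own assumption for illustration (as are the parameter values); the point is only the switch between the "free market" branch and the "pinned to the short rate" branch:

```python
import numpy as np

def long_rate(ngdp, m0, r3m, alpha=2.8, beta=-8.4):
    """Sketch of the scope-limited model: f(NGDP, M0) when the long rate sits above the
    short rate, otherwise pinned near the short rate. Form and parameters are illustrative."""
    r10_ie = np.exp(alpha * np.log(ngdp / m0) + beta)   # assumed log-log form for f(NGDP, M0)
    return np.where(r10_ie > r3m, r10_ie, r3m)           # out of scope -> r10y ~ r3m

# Placeholder magnitudes, chosen only to exercise both branches:
print(long_rate(ngdp=100.0, m0=17.5, r3m=0.022))   # in scope: returns the f(NGDP, M0) value
print(long_rate(ngdp=100.0, m0=17.5, r3m=0.035))   # out of scope: pinned to the short rate
```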

...

PS

Here are the latest views of the rate spread (estimated recession onset in late 2019 to early 2020) and the dynamic equilibrium model of the interest rate (using Moody's AAA rate). Click to enlarge.




...

Footnotes:

[1] In fact, it reduces the error by about 10%.

Thursday, October 11, 2018

Consumer Price Index (CPI) forecast performance

The latest CPI data was released today, and is basically in line with the dynamic information equilibrium forecast of inflation I've been tracking since 2017 (click to enlarge):


The dashed line shows a later estimate of the 2014 shock parameters from March 2018. It has a negligible effect on the rate of inflation, but it did impact the price level (i.e. the integral of the inflation rate):


Basically, the shock was a bit smaller than the estimate from early 2017 (which was made while the shock was still underway).

Tuesday, October 9, 2018

Unemployment continues to decline — why?


The unemployment data came out the morning of the workshop at the UW economics department I participated in, so the plot of the unemployment rate in my presentation was out of date by a month's worth of data. Here's the updated plot — the 3.7% unemployment rate falls a bit below the forecast (and there appears to be a general positive bias in the model [1]):


Some of the questions I got at my talk were about the process behind the observation of the constant negative (logarithmic) slope of the unemployment rate outside of a recession. Overall, this seemed to be the empirical observation that contrasted most with the typical view in economics (either some equilibrium rate or something like a natural rate). I knew that it was, and it was part of the reason I chose the labor market as the primary focus of my talk [2]. My answer was some vague hand-waving about the matching function. However, I'll try to answer it a bit more coherently here.

I'll begin with the information equilibrium Cobb-Douglas matching function $M$

$$
H = M(U, V) = c U^{a} V^{b}
$$

where $H$ is JOLTS hires, $V$ is JOLTS vacancies (openings), and $U$ is the level of unemployment (number of unemployed people). Taking the logarithm, we obtain:

$$
\log H = a \log U + b \log V + \log c
$$

Now divide both sides by $a$ and subtract $\log L$ (the log of the size of the labor force). After some re-arranging, we get:

$$
\frac{1}{a} \log H - \log L - \frac{b}{a} \log V - \frac{1}{a} \log c = \log \frac{U}{L}
$$

The right hand side is the log of the unemployment rate $u$ (the ratio of unemployed to the labor force). Taking the time derivative, we get:

$$
\frac{1}{a} \frac{d}{dt} \log H - \frac{d}{dt} \log L - \frac{b}{a} \frac{d}{dt} \log V = \frac{d}{dt} \log u \equiv \alpha
$$

The right hand side is empirically observed to be a constant rate of decline of the unemployment rate (outside a recession). Since the growth rates on the left hand side [3] are all positive (the total number of job openings, the labor force, and the total number of hires are all increasing), we can see that the slope comes out negative because of labor force growth and job openings growth — and labor force growth is fairly tightly correlated with economic growth. As I put it in the talk, economic growth and the matching function eat away at the stock of unemployed people over time.

Now, $\alpha$ being negative is not a foregone conclusion — the parameters of the matching function and the rate of population growth could be such that the unemployment rate increases (or stays flat) over time. So overall, the slope of the unemployment rate outside of recessions is a measure of matching efficiency (high absolute slope = efficient, low slope = not efficient).
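A back-of-the-envelope version of that statement, with hypothetical elasticities and growth rates (illustrative numbers, not estimates from the data):

```python
# Hypothetical matching elasticities and (log) growth rates, for illustration only.
a = 0.6            # elasticity of hires with respect to unemployment
b = 0.4            # elasticity with respect to vacancies (constant returns: a + b = 1)
g_hires = 0.02     # d/dt log H: ~2%/yr growth in total hires
g_labor = 0.01     # d/dt log L: ~1%/yr labor force growth
g_vacancies = 0.08 # d/dt log V: ~8%/yr growth in job openings

alpha = g_hires / a - g_labor - (b / a) * g_vacancies
print(alpha)  # about -0.03: the unemployment rate declines at ~3%/yr (log) with these numbers
```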

Interestingly, a look at the data for the unemployment rate by education level finds that efficiency is highest (and about equal) for people with college degrees or higher and for people with high school degrees. It is lower for people with less than a high school education, but lowest for people with "some college". One way to interpret this is that having completed college or high school improves matching (completion serves as an indicator), while not finishing high school or not finishing college makes job matching more difficult (e.g. such candidates are harder to evaluate than high school graduates or college graduates).

Additionally, matching efficiency by race is actually comparable for black and white people. This does not mean discrimination doesn't exist — just as companies being able to heuristically evaluate college graduates with the same "efficiency" as high school graduates doesn't mean high school graduates are paid the same or treated with the same respect in the workforce as college graduates. It'd be better interpreted as companies having a better idea of how to match high school graduates with high school graduate jobs — and, in the case of race, black people with "black" jobs. By efficiency, we don't mean an objective "good"; making prejudiced choices, despite being wrong, is likely faster and cheaper than striving to be unbiased. Higher "efficiency" could mean more discrimination.

...

Update 13 May 2019

I had forgotten that I had put together a model of this a few years ago. The first cell is unemployment, and the others are (four) other jobs. Here, the lowest unemployment rate is about 20% (i.e. 1/5 of the jobs with an equilibrium uniform distribution). In the limit of many other jobs, the "unemployment sector" becomes vanishingly small, and we could say that u ~ 0% in the long run (as it is in the dynamic equilibrium model, since d/dt log u ~ constant). Random transitions between the different sectors result in the most likely distribution — a uniform one.



...

Footnotes:

[1] This is likely due to beginning the forecast not just soon after a shock, but also at a point when the data was undergoing a positive fluctuation. Re-fitting the parameters makes everything fit a bit better — but the recent data is still a bit low:



[2] Not only was this the most empirically successful aspect, but also one that showed a significant contrast with the traditional approaches.

[3] Also, if we assume the matching function has constant returns to scale (i.e. $a + b = 1$, as is empirically plausible per Petrongolo and Pissarides (2001)), we can simplify a bit (where $h$ is the hires rate, and $v$ is the vacancy rate):

$$
\begin{eqnarray}
\frac{1}{a} \frac{d}{dt} \log h - \frac{1-a}{a} \frac{d}{dt} \log v & = & \frac{d}{dt} \log u \equiv \alpha \\
\frac{1}{a} \frac{d}{dt} \log \frac{h}{v} +  \frac{d}{dt} \log v & = & \alpha
\end{eqnarray}
$$

Saturday, October 6, 2018

"Outside the Box" Workshop


Yesterday, I participated in the "Outside the Box" Workshop at the University of Washington Economics department organized by Fabio Ghironi [meeting agenda below]. I had a great time, and there was a lot of enthusiastic engagement from the audience throughout the day. Thank you to Fabio for organizing this — I was grateful for the opportunity.

Here are some links to my talk (let me know if my Google Drive settings are incorrect):


They're both 67 slides (plus a few back-ups) for a 90 minute presentation [1]. The main difference is that the PowerPoint version has animations that function on slides 21, 22, and 56.

There were a couple of questions that have been addressed in blog posts in the past (I talked about Christopher Sims' work [here], I've constructed the three-equation NK DSGE model using information equilibrium as well as the IS-LM model, and the direction of information flow can actually go either way because of the mathematical properties of information equilibrium [here, here]) which allow you to write A ⇄ B with IT index k instead as B ⇄ A with IT index 1/k.

However, the big question that I don't think I answered in a completely satisfactory manner was about where the dynamic equilibrium — the constant (logarithmic) rate of decline of the unemployment rate — comes from in terms of the real world. My explanation in terms of the matching function [here, here] was (in my view) incomplete. But it did seem that the idea that the unemployment rate naturally falls until it is pushed up by recessions was a new perspective for the audience. I am planning on writing a more extensive blog post about it in the future [now available].

I don't have the slides for the other talks, but they are based on papers from Anup Rao [arXiv], and Val Popov [SSRN]. My talk was based on my recent paper [SSRN].


...

Footnotes:

[1] This may seem like a lot, but a sizable fraction of the slides are actually "build" slides where just a couple of lines or pictures change. While there are animations that could accomplish this on a single slide, those animations often don't translate to different formats (e.g. pdf), so separate build slides make the presentation a bit more portable. As a general rule, I aim for about 2 minutes per slide, leaving about 10 minutes for questions.