Friday, November 8, 2019

World GDP growth and silly models

In my travels on the internet, I came across this paper (Koppl et al [1]) from almost exactly a year ago. It has the silliest model of the world economy I've ever seen. Here's the abstract:
We use a simple combinatorial model of technological change to explain the Industrial Revolution. The Industrial Revolution was a sudden large improvement in technology, which resulted in significant increases in human wealth and life spans. In our model, technological change is combining or modifying earlier goods to produce new goods. The underlying process, which has been the same for at least 200,000 years, was sure to produce a very long period of relatively slow change followed with probability one by a combinatorial explosion and sudden takeoff. Thus, in our model, after many millennia of relative quiescence in wealth and technology, a combinatorial explosion created the sudden takeoff of the Industrial Revolution.
Caveats about extrapolating that far back notwithstanding, the problem isn’t so much what is written in the abstract but rather that the model cannot support any of the statements in it. Overall, it’s a good lesson (cautionary tale?) in how to go about mathematical modeling.

Just so there are no complaints that I "didn't understand the model", I went and reproduced the results. There's one thing they kind of gloss over in their paper (I'll come back to it later) that accounts for the small discrepancies.

First, the population data basically reproduces their graph exactly (I have two different sources that largely match up):


The black dashed line at the end will come back later. Constructing their recursive M function (that represents the combinatorial explosion) and putting it together with the Solow model/Cobb-Douglas production function in the paper allows us to reproduce their graph of world output (GDP in Geary–Khamis international dollars) since the dawn of the Common Era (CE):


Like the population graph above and the M-function graph below, it is also graphed on a linear axis for some reason. They zoom in to 1800-2000 because they want to talk about the Industrial Revolution:


We reproduce this down to the segmented lines drawn between points in the time steps. Although you can't really see it in this graph, this is really part of a continuous curve in the model that goes back to at least the 1600s — it's not the Industrial Revolution (for more on take-off growth, see here or here). A log-log graph helps illustrate it a bit better:



The authors then show their output points alongside a measure of GDP in international dollars. For some reason it’s now points instead of line segments. But at least we’re on a log scale!


I didn’t use the exact same time series for comparison; instead, I used GDP estimates from Brad Delong here [pdf] that I had on hand. However, they're reasonably close to the data presented in their paper. In fact, the fit is a bit better! I'm doing my best to be charitable. There is an almost exact factor of 4 difference between the level of their data and Delong's, which I think comes from quarterly data being reported at a “seasonally adjusted annual rate”. Koppl et al actually have two other model fits in their paper with different parameters; I just reproduced the yellow one that was closest to the data (see the others at the end of this post).

The one graph I’m not reproducing exactly here is their M function. I think they plotted a version with different parameters than their yellow model result. As I didn’t care that much, I used the M function from the yellow result since that’s most germane to our discussion. Like most combinatorial functions, it goes along fairly flat (in linear space) and then jumps up suddenly (again, in linear space):


It starts at M₀, which is 50 in the parameters for their yellow result. The last several numbers in the series are 117.1, 125.3, 136.1, 151.0, 173.7, 213.2, 303.1, 668.8, 9323.4, 326360625.7. The next number is 4.9 × 10^26. Roughly five hundred septillion.

What is supposedly happening in this model is that products from the current inventory (stick, flint, feathers) are brought together at random and, with some probability, produce a new product (an arrow for a bow). Elements of that new, larger inventory are then brought together to craft further output goods. It's basically a Minecraft crafting economy, with the number of products you discover increasing combinatorially (roughly on the order of e.g. the gamma function or factorial). The factorials enter through a binomial coefficient.
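To make the mechanics concrete, here's a minimal sketch of this kind of combinatorial recursion in Python. It is not the paper's exact specification: the combination probability alpha and the four-item cutoff are stand-ins chosen for illustration (M₀ = 50 is from their yellow result), but it shows the characteristic long, flat stretch followed by a sudden blow-up. In the paper, the time step is then "scaled" to about 31 years so the whole series spans roughly year 1 to 2000 CE (more on that below).

```python
# Minimal sketch of a combinatorial recursion of this general kind (not the
# paper's exact specification). Combining i existing goods succeeds with
# probability alpha**i, and there are C(M, i) ways to pick i goods from M,
# so M_{t+1} = M_t + sum_i alpha**i * C(M_t, i).

from math import comb

def tap_series(M0=50.0, alpha=0.03, max_items=4, steps=40):
    """Iterate the recursion; alpha and max_items are illustrative stand-ins."""
    M = [M0]
    for _ in range(steps):
        m = M[-1]
        new = sum(alpha ** i * comb(int(m), i) for i in range(2, max_items + 1))
        M.append(m + new)
        if M[-1] > 1e30:  # stop before the next combinatorial step overflows a float
            break
    return M

for t, m in enumerate(tap_series()):
    print(f"step {t:2d}  M ~ {m:.4g}")
```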

Combinatorial explosion is building all along, but it really doesn't explain the Industrial Revolution. In fact, you can’t really say this “starts” anywhere with any kind of objective criteria. It starts at M₀, if anything, which is assigned to “year 1” (t = 0) — the beginning of the CE. The location of the super-exponential “take off” point (viewed on a linear scale) is then 60 or so time steps from year 1. But what is a time step? That’s what the authors gloss over. The time is just “scaled” so that the combinatorial series fits in the period from about “year 1” to about the present year.

The time steps turn out to be about 31 years (at least that's what I used), which is remarkably close to a “generation”. But this time scale is a fundamental parameter of their model — telling us where and when the combinatorial explosion occurs. If it had instead been on the order of a quarter, we could go from subsistence to the modern age in about 16 years. Instead, the process by which some combination of existing goods produces a new product with some probability happens only once every 30 years or so. You could of course adjust the probability to compensate for a change in the time scale — making the probability parameters smaller increases the number of time steps it takes to cover the dynamic range of GDP values. However, since none of these parameters are estimated from underlying data, the exact location and span of the model result in time is completely arbitrary.
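As a rough check on that number: with the take-off around 60 steps in and a handful of explosive steps after it, call it roughly 65 steps spanning the roughly 2,000 years from year 1 to the present, so

$$
\Delta t \approx \frac{2000 \text{ years}}{65 \text{ steps}} \approx 31 \text{ years per step}
$$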

I will pause to note that leaving out time scales like this is a general failing in economics (see here or here), making it impossible to understand the scope of their theoretical models.

The real problem comes when you go to the next time steps (I've also started adding labels to the graphs themselves). Combinatorial explosion doesn’t stop once you’ve explained as much of the data as you want to explain. It keeps going, and going, and going, and going ...


Of course the GDP data ends so we can't see just how realistic this model is. Remember — their M function is heading towards 10^26 when it's about 600 around the year 2000.

This made me want to use the dynamic equilibrium model to extrapolate the data a bit further. In it, we have general exponential growth interrupted by periods of much higher (or lower) growth (“shocks”).
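Schematically (suppressing details), the dynamic equilibrium model treats the log of a quantity as a constant-growth trend plus a sum of roughly logistic shock terms, something like

$$
\log y(t) \approx \alpha t + c + \sum_{i} \frac{a_i}{1 + e^{-(t - t_{i})/\tau_{i}}}
$$

where each shock has a size $a_i$, a center $t_i$, and a width $\tau_i$.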

I wrote about population growth and how you might go about modeling it with the dynamic equilibrium model about two years ago, with a follow-up referencing the well-known 1970s report titled Limits To Growth. The general result there is that the recent population data is consistent with a saturation level of about 12-13 billion people by 2300. The most recent surge in population growth is associated with the advent of modern medicine (others seem to be associated with e.g. the Neolithic revolution in farming or sanitation). Maybe that’s right, maybe that’s wrong. But at least it’s a realistic extrapolation based on a slow decline in world population growth.


I used the world GDP data and population data to create a GDP per capita measure. I then extrapolated that data using another dynamic equilibrium model — one that’s remarkably consistent with the widespread phenomenon of women entering the workforce in larger numbers in the 1950s, 60s and 70s in the world’s largest economies. Again, it’s possible GDP per capita will continue to expand at its current rate for much longer than the next 25 to 50 years, but with growth slowing in most Western countries and even China, it’s entirely possible we’ll see a decline to a rate of growth more consistent with the 1800s than the 1900s.


We can combine our extrapolation of GDP per capita with population to form an extrapolation of world GDP over the next hundred years. The new picture of the longer term output growth shows how silly the combinatorial model is unless we arbitrarily restrict it to the most recent 2000 years.


In 2077 [2], world GDP by this extrapolation is about 513 trillion 1990 Geary–Khamis international dollars instead of the combinatorial version which gives 8.2 duodecillion (10^39) international dollars. We can compare this to world GDP in 2000 which was about 96 trillion international dollars in this data.
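Putting those numbers side by side relative to the year-2000 figure:

$$
\frac{513 \text{ trillion}}{96 \text{ trillion}} \approx 5
\qquad \text{versus} \qquad
\frac{8.2 \times 10^{39}}{9.6 \times 10^{13}} \approx 10^{26}
$$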

An increase by a factor of about 5 from the year 2000 is not entirely unreasonable given slowing global growth, but an increase to 8.2 duodecillion dollars (a word I had to look up) [3], a factor of nearly 10^26, seems ... um, improbable.

US real GDP grew by a factor of about 10 over 70 years from 1950 to today, but that also includes the period in the middle of the last century where growth was much higher. Plus, the data in the GDP extrapolation also grows by a factor of about 10 over 70 years from 1950 to 2020.

The main takeaway is that this combinatorial model is arbitrary both in its timing — it's set up to have growth explode after the Industrial Revolution — and in its scope, being limited to the period from about 1 CE to about 2000 CE [4]. Going a single time step too far gives not just unrealistic but absolutely silly results. The model seems very much like someone (maybe Koppl) had this combinatorial idea (maybe after someone mentioned Minecraft to him) and it was given to a bunch of grad students to figure out how to make it fit the data. Odd parameters, large time steps that result in segmented data graphs, arbitrarily setting terms in sums to zero — it's not a natural evolution of a model towards the data. I saw this in their figure 4 and laughed:


Of course, the default color scheme for Mathematica is instantly recognizable to me (and is in part why I tried to reproduce the figures exactly, down to the dotted grid lines). But these line segments are all supposed to be aiming for that blue line. None of them are remotely close to even qualitatively explaining the data.

It's not an a priori bad insight for a model — it makes sense! It's kind of Gary Becker's irrational agents meeting a Minecraft opportunity set. But combinatorial explosion is just too big to explain GDP, which is much more in the realm of the exponential with varying growth rates. So instead of mathematical modeling, you start building a Rube Goldberg device to make the model output kind of look like the data ... if you squint ... from across the room.

And yet instead of languishing on a grad student's file share or hard drive where it should be, this model ended up LaTeX'd up on the arXiv.

...


Footnotes

[1] It should be noted that Roger Koppl, the lead author, is associated with Mercatus and George Mason University (like one of the other co-authors), with lots of references to Hayek and Austrian economics in any description of his work. Additionally, the paper came up on Marginal Revolution this past week. That warrants a huge grain of salt, and in fact this paper is pretty typical of the quality of the work product from GMU-related activities [5].

[2] Chosen due to the time step scale.

[3] This made me think of Graham's number — for a time the largest number that had ever been used for anything practical (in that case an upper bound for a graph coloring problem). It came to mind in part because the Koppl et al GDP is so high itself, but also because, much as mathematicians suspect the real answer behind Graham's number is about 20, a more realistic estimate of GDP is much, much lower.

[4] There are other choices, such as limiting the combinations to only about 4 items (which I believe was more a computational limit: my computer has overflow problems if you increase that number or add too many time steps), that basically turn this "model" into a ~10-parameter fit.

[5] The paper goes on a tangent about "grabbing", which is basically a right-wing rant:
Our explanation might seem to neglect the important fact of predation, whereby some persons seize (perhaps violently) goods made by others without offering anything in exchange for them. Such “grabbing,” as we may call it, discourages technological change. 
The model put forward has absolutely nothing to do with this and can't explain technological change well enough to warrant speculation about secondary effects like this.

In addition, this is completely ahistorical. Violently seizing others' goods is in fact a major driver of innovation in history — a huge amount of innovation comes in the form of weapons. The silicon chips you're using right now to read this? They were needed to make computations fast enough to accurately guide a nuclear weapon to its target. The first vacuum-tube computers were built to calculate artillery firing tables — even a lot of physics itself came out of this.

Wage growth and belated GDP updates

Wage growth data from the Atlanta Fed came out yesterday, and the dynamic information equilibrium model (DIEM) has been doing fairly well for a while — coming up on two years in a couple of months!


J.W. Mason will have to continue to be puzzled. Black is post-forecast data (click to enlarge). We can also say the DIEM forecast was better than Jan Hatzius' (Goldman Sachs) forecast from this same week one year ago. Orange is the actual average over that time period with one-standard-deviation errors, compared with Hatzius' range in purple:


Other forecasts where the DIEM is outperforming are for RGDP growth and PCE inflation (those belated GDP updates). These cases aren't so much about the center of the prediction as about the DIEM's error band being smaller while remaining accurate.


There's also the FOMC forecast, which is fine, but its "central tendency" claims a lot more precision, which must reflect something other than RMS error [1]:


And here's the forecast of nominal GDP (NGDP) over employment (PAYEMS or here L) that forms the basis of (the information equilibrium view of) Okun's law and the "quantity theory of labor":



...

Footnotes:

[1] A "central tendency" in the opinions of a group of people is sometimes called "groupthink".

Friday, November 1, 2019

Jobs day: October 2019

The Employment Situation data from BLS was released on FRED today (a.k.a. "Jobs Day"), which includes the latest unemployment rate and "prime age" (25-54) labor force participation rate data among many other measures. I've emphasized those two particular measures because I've been tracking the performance of the Dynamic Information Equilibrium Model forecast for them since 2017. And now, almost three years later, they're as accurate as ever (black is the post-forecast data):



For a bit of context, here's a rogues' gallery of forecasts from the Federal Reserve Bank of San Francisco (FRBSF), Ray Fair, Nobel laureate Paul Romer (a prediction from 2017 [1]), and Jan Hatzius (Goldman Sachs) [click to enlarge]:



Additionally, PCE inflation data came out yesterday — it was also in line with the (very boring) DIEM forecast:




...

Footnotes:

[1] Also in the tweet with the unemployment prediction is a horribly wrong labor force participation forecast (the DIEM model of CIVPART was based on the dynamic equilibrium for the prime age participation rate forecast above). Click to enlarge:


Saturday, October 26, 2019

Exploration of an abstract space: prices, money, and ... ships at sea?


AIS data from ships at sea. Credit: Spire Maritime.

I was asked a question on Twitter that I think does help us understand how the information equilibrium framework views prices and money. Of course, it being Twitter, this wasn't exactly asked as a question but rather offered as a condescending retort:
"That guy [i.e. me] is just confused. He doesn't even acknowledge that the price has to be paid. In his model, there is no difference between a price that has to be paid and one that doesn't have to be paid. → there is no concept of truthful revelation."
I do appreciate the fact that he must have read the material, because he came away with a conclusion that is in fact true. The implied question is how I reconcile using information equilibrium to describe not just prices, but also things that have nothing to do with prices as we traditionally think of them.

What follows is an edited and expanded version of my response on Twitter with links.

The issue is that there is absolutely no way, mathematically speaking, for that "truthful revelation" message of paying a price in a single transaction to be communicated through the network. The set of prices simply does not contain the "bandwidth" to carry that information. In mathematical terms, the dimension of the space of price messages is much smaller than the dimension of the space of information about the transaction. Therefore, neither that "truthful revelation" information nor paying the price could be critical to the functioning of a market. More likely (but still speculative), the price mechanism is destroying huge quantities of irrelevant information via what is called an "information bottleneck" in machine learning.
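As a toy illustration (the numbers are purely for scale): a price quoted to the cent anywhere up to ten thousand dollars can distinguish at most about a million outcomes, i.e.

$$
\log_2 (10^{6}) \approx 20 \text{ bits}
$$

while a genuinely "truthful" account of a buyer's preferences, budget, and available substitutes would take far more than 20 bits to encode. The price simply can't carry it.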

In fact, what's more important is when a transaction cannot happen. That non-transaction carries much more information about macro constraints (buyers cannot afford it, do not want it, have a substitute, sellers do not have enough, or it costs more than the current price to manufacture) — mapping out the opportunity set. (Again, maybe the information bottleneck is singling out the lower-dimensional subset of transactions that map the opportunity set.)

A good analogy for what those abstract "tokens" we call money are doing is what ships do in the ocean — both mediate transactions while exploring a space. In the picture below, we have a bunch of AIS data from ships near the port of Galveston and the Houston Ship Canal. Ships generally try to take the shortest, most efficient path between their origin and destination, but can also travel anywhere the water is deep enough. Sometimes they have to avoid storms, and sometimes they have to follow specific paths — like the well-defined Houston Ship Canal in Galveston Bay.


No single journey maps the world, but a collection of their paths creates an (albeit incomplete) picture of it. That's the graphic at the top of the post — it's AIS data alone, yet it produces a strikingly good map of the continents. The ships exploring the "opportunity set" of the ocean collectively map out the complex set defined by macro constraints (i.e. continents). That's what money is doing, except in a more abstract space we can't see.

Or at least that's what money is doing if the information equilibrium picture of economics is correct! Information equilibrium follows from agents fully exploring (i.e. MaxEnt) the available opportunity set — or, as I sometimes put it, "state space". Random agents do that, but to a good approximation so do complex intelligent agents whose decision-making you don't necessarily understand — the limit of algorithmic complexity is algorithmic randomness. Often people will say that I treat people like mindless atoms, but that's just a useful approximation — and humility! I don't pretend to know how people make complex decisions, so I effectively treat them as so complex as to be random.
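Here's a toy version of that idea in Python (the "opportunity set" and every parameter below are made up for illustration): random agents confined to an allowed region trace out its shape just by wandering, much like the ships above.

```python
# Random agents confined to an allowed region (the "opportunity set") map out
# its shape just by wandering. The region and parameters are hypothetical.

import random

def allowed(x, y):
    """Hypothetical opportunity set: a unit box with a forbidden circular 'continent'."""
    inside_box = 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0
    inside_continent = (x - 0.6) ** 2 + (y - 0.5) ** 2 < 0.2 ** 2
    return inside_box and not inside_continent

def explore(n_agents=200, n_steps=500, step=0.05, seed=0):
    """Random walkers accumulate a 20x20 occupancy grid of visited cells."""
    rng = random.Random(seed)
    grid = [[0] * 20 for _ in range(20)]
    for _ in range(n_agents):
        x, y = rng.random(), rng.random()          # start at a random allowed point
        while not allowed(x, y):
            x, y = rng.random(), rng.random()
        for _ in range(n_steps):
            nx, ny = x + rng.uniform(-step, step), y + rng.uniform(-step, step)
            if allowed(nx, ny):                    # moves into the "continent" are rejected
                x, y = nx, ny
            grid[min(int(y * 20), 19)][min(int(x * 20), 19)] += 1
    return grid

# Cells never visited trace out the forbidden region: an (incomplete) map
# built only from where the "voyages" actually went.
for row in explore():
    print("".join("#" if count == 0 else "." for count in row))
```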

We can see that the AIS picture of the continents is incomplete. That's what the framework calls "non-ideal information transfer": non-ideal transfer from the information defining the shape of the continents to the information in the AIS tracking data. I talk about that in more detail in my Evonomics article (which brought me into that Twitter thread) as well as in my talk at UW econ. The key takeaway is that the information transfer framework (which includes both information equilibrium and non-ideal information transfer) assumes markets are not necessarily ideal — that the AIS map of the continents is imperfect.

In addition to non-ideal information transfer, there are also non-equilibrium shocks. In the AIS picture, that would be things like embargoes against certain countries or major storms that disrupt shipping. The dynamic information equilibrium model (DIEM) — information equilibrium plus a model of non-equilibrium shocks — is one way to try and model these effects that's remarkably successful in describing e.g. the unemployment rate (tracking it for over two years, and outperforming several other models):


Speaking of the unemployment rate and getting back to the original "question" at the top of the post, what's interesting to me is that the process of exploring the opportunity set is what happens in every "market" even if there aren't "prices" in the usual sense. Or at least where "prices" aren't always the observable data. An example is the job market. The observable "prices" in that case are hires or unemployment — salaries are often not as easily measured as stock market prices. Human agents explore the abstract space of employment opportunities that are in aggregate bounded by macro constraints — even if you can manage to talk your way into an employer hiring you against their initial objections (i.e. influence the local shape of the opportunity set) there are still constraints in the aggregate.

That's why it's more useful to think of prices abstractly — they represent a transaction where some amount of A is exchanged for some amount of B. That A can be a job, money, blueberries, or your free time. Mapping the abstract constrained opportunity set with transactions is about information, and it doesn't care what's doing the mapping or what the content of the message is — the key insight of information theory. When those things matter, we're back to non-ideal information transfer [1].

That's why "there is no difference between a price that has to be paid and one that doesn't have to be paid" — if an observable represents information about a change in the information content of an opportunity set (a hire, a market price change), then there's economics happening there. Information is flowing — from person to person at the individual level — but the price (even an abstract one like the unemployment rate) is really only seeing changes in information flow.

...

Footnotes:

[1] There's a neat mathematical illustration of this using the chain rule — in fact, we can think of money (or ships!) as a real-world manifestation of a chain rule for an economic derivative. If we exchange A for B, we have an exchange rate "small amount of A" ($dA$) to "small amount of B" ($dB$) or:

$$
\frac{dA}{dB}
$$

... a derivative in calculus. Of course you could exchange A for a small amount of money ($dM$) and money for B:


$$
\frac{dA}{dB} = \frac{dA}{dM} \frac{dM}{dB}
$$

That's just the chain rule in calculus. As long as we maintain information equilibrium between A, B and M, then money doesn't really matter.

As a side note: ships are an example of tokens that go with the flow of the transaction, as opposed to money, which goes in the opposite direction. It's interesting because the direction of exchange for money is basically a sign convention in information equilibrium, as I mention in a footnote here that also gets into the direction of information flow discussed in some of my earliest posts.


Thursday, October 10, 2019

Wage growth in NY and PA

Without meaning to start an argument, I concurred with Steve Roth and @Promethus_Fire that a minimum wage study by the NY Fed might not have taken into account factors that may have confounded it. That contradicts J. W. Mason's assertions, made without evidence, that a) border discontinuity automatically controls for such factors (it does not), and b) economic data is continuous across the NY-PA border (it is not, and I provide several examples that by inspection should give us pause in making that assumption).



Even otherwise arbitrary political boundaries that you might think were transparent to the people living there create weird effects. One example I remember vividly from my many drives as a student on US 290 between UT Austin and the suburbs of Houston (where I grew up): at the border between Washington County and Waller County, upon crossing the Brazos River, the road suddenly became terrible. There's no particular reason for this in terms of demographics or geography, but the political boundary meant some completely different funding formula or crony capitalist network at the county level. Something similar happens at the NY-PA border:


On the NY side we have shoulder markings and shoulders that vanish right when you cross the border into PA. It's a tiny difference, but it means more materials and hundreds more labor hours of public spending on the NY side of what is basically the same road. And it's not like people travel into PA never to be heard from again — on this stretch of road traffic is likely balanced in either direction and most certainly isn't discontinuous at this specific point.

Anyway, that was the point I was trying to make. Other things, like level of education, also vary across this border, and the PA side is much more likely to have an old-fashioned male-breadwinner model of household income. My most recent piece of evidence was that the rate of foreign-born residents was higher on the NY side (which looks like New England) than the PA side (which looks like West Virginia).

But then J. W. Mason expressed incredulity at my claim that the wage growth data was relatively smooth. This led me down a rabbit hole where I put together a dynamic information equilibrium model (DIEM) of wage growth on both sides of the border based on the NY Fed data. This data was restricted to leisure and hospitality sectors, but it turns out to be interesting nonetheless. Here's the NY Fed's graphic:


Now I put together the wage growth model at the national level about two years ago. And one of the reasons I went down this rabbit hole was that the Atlanta Fed just released data for September in their wage growth tracker today and I had just compared that data with the forecast:


Pretty good! And it's definitely better than any other forecast of wage growth in the US that's available. If we use this model to describe the NY and PA data, we get a pretty good fit:


There's a single non-equilibrium shock slowing growth right at the beginning of 2012 — coincidentally just when the ARRA deficit spending dried up. There are no other effects, and the rest of the path — including all the data through the NY minimum wage increases — is a single smooth growth equilibrium.
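For concreteness, here's a schematic of that kind of fit in Python: a log-linear trend plus a single logistic shock centered near 2012. The series below is synthetic (made-up numbers, not the NY Fed data); it just illustrates the functional form and the residual check.

```python
# Schematic fit: log-linear trend plus one logistic "shock" (synthetic data).

import numpy as np
from scipy.optimize import curve_fit

def diem_log(t, c, g, a, t0, tau):
    """Log of the series: linear trend plus one logistic shock centered at t0."""
    return c + g * (t - 2010.0) + a / (1.0 + np.exp(-(t - t0) / tau))

t = np.linspace(2010.0, 2019.0, 40)
true = diem_log(t, np.log(9.0), 0.03, -0.05, 2012.0, 0.5)        # made-up parameters
wages = np.exp(true + 0.01 * np.random.default_rng(0).standard_normal(t.size))

params, _ = curve_fit(diem_log, t, np.log(wages),
                      p0=[np.log(9.0), 0.02, -0.1, 2012.0, 1.0])
resid = np.log(wages) - diem_log(t, *params)                      # ~ fractional deviation
print("fitted (c, g, a, t0, tau):", np.round(params, 3))
print("max |residual|: {:.1%}".format(np.abs(resid).max()))
```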

How smooth? The model fits the data to within about 2%. That's quantitative evidence that J. W. Mason's incredulity was completely unfounded. If we look at these residuals (which are less than 2%), there is a noticeable correlated deviation right during the NY minimum wage increases:


However, this correlated deviation is mirrored in the PA data, which means PA and NY saw the same deviation from smooth growth. There's no meaningful difference between the two that's correlated with the NY minimum wage increases: both saw the same correlated deviation, but more importantly both basically saw wages grow as expected, with the deviation from trend growth less than about 2%. If you had forecast in 2010 that average wages would be 10 dollars per hour in 2016, they'd have come in at 10 dollars ± 20 cents.

[Update 13 November 2019] Additionally, that correlated deviation in wage growth matches up with the national-level surge in wage growth in 2014-2015 (two figures above). [End update]

It's important to emphasize the part about the lack of differences correlated with the minimum wage hikes — over the entire period, wage growth is not just higher but it increases faster on the NY side. But that's a difference between the NY and PA sides of the border that's persistent through the period 2010-2019.

Does this mean minimum wages are bad? No! In fact, since wages are largely a good proxy for economic output, this shows minimum wages likely have no effect on economic growth. Contrary to the naysayers who claim minimum wage hikes slow growth or cause unemployment, this aggregate data shows they have no real effect.

Wait, no effect? How can that be good?

Because it's no effect at the aggregate level. At the individual level, earning more money for an hour of minimum wage work is a great benefit since one earns the money faster while allocating a given amount of one's limited time to work. If you don't see any aggregate effects, it basically means minimum wage workers effectively have more free time since they're ostensibly producing the same output for the same total compensation (which they arrive at faster because of the higher wage) — otherwise, there'd be aggregate effects!

If your car gets a boost and now travels 100 mph instead of 70 mph, but you still get from Seattle to Portland in three hours, you must have spent more time stopped at a rest stop or eating at a restaurant — increased leisure time.

Of course, this assumes the data is measured properly and the conclusion of no aggregate effects is correct — some studies see net gains from minimum wage increases (i.e. we get from Seattle to Portland in two and a half hours).

Wednesday, October 9, 2019

Calling a recession too early (and incorrectly)

A little over a year ago, I said that the JOLTS Job Openings Rate (JOR) data was indicating a possible recession in the 2019-2020 time frame based on the dynamic information equilibrium model (DIEM). It appears that even if there is a recession in 2020, this "forecast" will not have been accurate. This post is a "post mortem" for that failed forecast, looking at various factors that I think provide some interesting insights.

Data revisions

As noted in the forecast itself, there was always the possibility of data revisions — especially in the March data release around the Fed March meeting. The March 2019 revision was actually massive, and affected every single data point in the JOLTS time series ... in particular JOR. It made the previous dip around the time the forecast was made largely vanish.



Leading indicators?

The original reason to look to JOLTS data as a leading indicator was that the JOLTS measures seemed to precede the unemployment rate in terms of the non-equilibrium shock locations. In 2008, the hires rate (HIR) seemed to lead, with JOR closely following. Closer analysis shows that HIR fell early in part due to construction in the housing bust (which also affected JOR). I speculated at the time that the ordering probably changes depending on the details of the recession. In the more recent data, it looks like the quits rate (QUR) might be the actual leader. That would make more sense for a demand-driven, uncertainty-based recession where people cut back on spending and future investments (or having children) and, seeing a rough patch ahead, might be less inclined to quit a job.

Second order effects!

Recently I noticed a correlation between the fluctuations around the dynamic equilibrium for JOR and the S&P 500. A rising market seems to cause a rise in JOR about a year later. When the forecast was made in 2018, the market rise of 2017 had yet to manifest itself in the JOR data. The "mini-boom" of 2014 along with the precipitous drop of 2016 made it look more like a negative shock was underway.


I should note that these fluctuations are on the order of 10% relative to the original model (i.e. less than a percentage point in estimating the rate), so they represent a roughly 10% effect on top of the dynamic equilibrium.
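For what it's worth, here's a rough sketch of how one might check that kind of lead-lag relationship. The series are synthetic placeholders; the point is the cross-correlation method, not the data.

```python
# Check a lead-lag relationship by cross-correlating two (synthetic) series.

import numpy as np

rng = np.random.default_rng(1)
n = 120                                            # ~10 years of monthly observations
market = np.cumsum(rng.standard_normal(n))         # stand-in for (log) S&P 500
lag_true = 12
jor_fluct = np.roll(market, lag_true) + 0.5 * rng.standard_normal(n)
jor_fluct[:lag_true] = jor_fluct[lag_true]         # crude padding of the rolled-in values

def lagged_corr(x, y, lag):
    """Correlation of y at time t + lag against x at time t."""
    return np.corrcoef(x[:n - lag], y[lag:])[0, 1]

corrs = [lagged_corr(market, jor_fluct, k) for k in range(25)]
best = int(np.argmax(corrs))
print(f"best lag: {best} months (correlation {corrs[best]:.2f})")
```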

Mis-estimating the dynamic equilibrium

These various factors combined into a bad estimate of the JOR dynamic equilibrium that was much larger (i.e. higher rate) than it appears today. The rate was estimated to be about 25% higher (10.7% versus 8.7%), which meant a persistent fall in JOR relative to the forecast:


I should also note that the entropy minimization procedure (described here as well as in my talk at UW econ) has a much better result (i.e. well-defined minimum) with the additional data:


This did not affect the other JOLTS measures as strongly — and in fact the HIR data has shown little evidence of a "recession", especially since I discovered the longer HIR data series a couple months after the original forecast. The quits data has only recently been showing the beginnings of a deviation from the original 2017 forecast:


While all this is bad for my 2018 recession prediction, it actually means the dynamic equilibrium model was really good at forecasting the data over the past two years.

Saturday, September 14, 2019

Odds and ends from the first half of September

I've been really busy these past few weeks, so I haven't made many updates to the blog — mostly posting half-thoughts and forecast tracking on Twitter. One thing I did post about was that a fluctuation in the JOLTS data around the dynamic equilibrium appeared correlated with the S&P 500. I updated it today to emphasize that this is a second-order effect — on the order of a few percent deviation from the dynamic equilibrium. I did try out a scheduled tweet that came out just before the unemployment data was released at 8am ET on Friday 6 September 2019 (click to enlarge):


The DIEM forecast got the data exactly right. I also noted in the thread that the DIEM forecast outperforms linear extrapolation — even if you try to choose the domain of data you extrapolate from (the different lines in the second graph show all the different starting points for the extrapolation):


This means that the DIEM is conveying real information about the system.

CPI data came out this week and the DIEM continues to do well there too (continuously compounded and year over year inflation):


One thing to note is that the DIEM model (red) is extremely close to a direct fit to the combined pre- and post-forecast data (black dashed), and the non-linearity in the DIEM actually improves its relative performance:


This means that for a function that is this smooth over time, no other model could be anything more than a marginal improvement. The only possibility of doing better is if the fluctuations around the DIEM path are not noise — and in fact the "cyclic" fluctuations around the DIEM path might be related to the fluctuations around the JOLTS log-linear path:


If you squint, the inflation fluctuations might be in sync with the JOLTS fluctuations:


However, this is fairly uncertain — it's not a robust conclusion at this point.

...

In addition to looking at macro time series, I also took a look at some demographic data about childhood mortality using a new data set. We can see the effect of sanitation in the UK, as well as a potential effect of the more general legalization of abortion:


The data for Japan doesn't go back as far, but it is consistent with a similar "sanitation transition" (when extrapolated) and also shows the effect of WWII:


The US data doesn't go back far enough to draw any conclusions (and the shocks are somewhat ambiguous):