Wednesday, May 16, 2018

Limits to knowledge of growth

Via Twitter, C Trombley was looking at a model of growth used in a report called "Limits to Growth" [LtG] from the 1970s, as well as a more recent update that revisits its forecasts [pdf]. I'm just going to focus on the population growth model, because I happened to put one together using the dynamic information equilibrium model last year based on (likely problematic for multiple reasons) estimates of world population since the Neolithic (click to expand):

Let me show a couple of the scenarios in LtG (red, green) alongside the dynamic information equilibrium model (blue dashed) (click to expand):

The blue line is the data used for the dynamic equilibrium model, and the black line is the data that was available to LtG. The dynamic equilibrium model is basically consistent with the two LtG scenarios, except that describing the LtG paths requires adding non-equilibrium shocks centered in 2065 and 2080 with widths of 55 and 24 years, respectively.
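For concreteness, that description can be sketched as log-linear growth plus logistic shocks. This is a minimal sketch of the functional form, not the actual fit: the shock centers and widths (2065, 55 years; 2080, 24 years) come from the text above, while the growth rate, offset, and shock amplitudes (chosen to roughly halve the population, i.e. a total log drop of about 0.7) are purely illustrative assumptions.

```python
import numpy as np

def log_population(t, a, b, shocks):
    """Log population as a log-linear trend plus logistic shocks.

    shocks: list of (amplitude, center, width) tuples; a negative
    amplitude models a population decline.
    """
    out = a * t + b
    for amp, center, width in shocks:
        out += amp / (1.0 + np.exp(-(t - center) / width))
    return out

# Centers and widths from the fit described in the text; the
# amplitudes (summing to about -0.7, i.e. a halving) are made up.
shocks = [(-0.4, 2065.0, 55.0), (-0.3, 2080.0, 24.0)]

years = np.linspace(1950.0, 2150.0, 201)
path = log_population(years, a=0.01, b=-12.0, shocks=shocks)
```

The point of writing it this way is that everything about the future crash lives in the shock parameters, which is exactly the part of the model the pre-2030 data cannot pin down.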

Before 2030, the data is essentially log-linear, which means there's a big problem: the data required to estimate the future deviations from log-linear growth in the LtG model was not available in the 70s, is not currently available, and won't be available until at least 2030. That is to say, we don't have any knowledge of the parameters of the process responsible for those futures. Given that we have never observed a human population crash of that magnitude (literally a decline of billions of people) happening over those timescales (a few decades), the estimates for the model parameters resulting in those paths are pure speculation [1].

Now you may ask: why doesn't the dynamic equilibrium model also have this problem? As you can see in the top graph of the estimates of human population since the Neolithic, we actually have multiple shocks to validate the approach. But the more important point is that the latest shock estimated was centered in the 1950s and therefore we have more complete knowledge of it. It's true that estimating the magnitude of an incomplete shock may lead to under- or over-shooting. But the model isn't positing a deviation from log-linearity about which all of the information needed to estimate it lies in the distant future.

This isn't to say that the LtG models will be wrong — they could get lucky! The Borg might land and start assimilating the population at a rate of a few million a year (until we develop warp drive in 2063 and begin to fight back) [2]. But you should always be skeptical of models that show we are on the verge of a big change in the future [3].



[1] In fact, looking at the shocks I'd surmise that the LtG model just assumes the population in the 1970s was approximately the "carrying capacity" of the Earth so something must get us back there in the long run.

[2] I loosely based this scenario on Star Trek: First Contact.

[3] I will inevitably get comments from ignorant people: What about climate models? None of these show a qualitative change in behavior and basically just represent different estimates of the rate of temperature increase:

And the policy models just show the effects of different assumptions (not their feasibility or likelihood):

The analogy with the LtG model would be if the LtG model just assumed a particular path for birth/death rates (it does not; in fact, it claims to predict them).


  1. So the shocks centered at 2065 and 2080 were just assumed by the LtG authors?

    1. I couched it in terms of dynamic equilibrium shocks, but their model actually has what you might call restorative forces. However, since these restorative forces hadn't kicked in enough to have affected the population time series by 1970 (and wouldn't in their models until 2030), it would have been impossible to measure the parameters describing those forces. And since there's never been a time in recorded history when the population declined by half or more, there is no other source for that information.

      It's like saying you know about a 100-year cycle in GDP growth or interest rates and plotting a graph of a sine wave that peaked in the 80s, had a trough in the 2010s, and peaks again around 2050. We simply do not have the relevant data to support such a model unless it were extremely accurate about something else. But even then, the estimate of the 100-year cycle frequency would be *highly* uncertain.

      It is the equivalent of just making up a shock in the dynamic equilibrium model.

    2. Note that a population decline from 10 billion to 5 billion over 20 years or so implies a net loss of roughly 250 million people per year, i.e. hundreds of millions of excess deaths annually (death rates spike in the model ... due to pollution).

      Now I could see a mass anoxic event in the oceans poisoning the entire East Coast (like a local version of the Permian-Triassic [P/T] extinction), but that's not in the model, and we don't have enough data to extrapolate the smaller dead zones into a global catastrophe.

      (Which is why global warming is such a big issue: uncertainty coupled with possible major consequences.)

    3. Yes: while long cycles have an intuitive plausibility, based on things like cultural forgetting and generational opposition, once you get down to the data and the noise they end up looking indistinguishable from random walks.
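The 100-year-cycle point in the thread above can be illustrated with a quick numerical sketch. All the numbers here are invented for illustration: 40 years of annual data generated from a nominal 100-year sine cycle plus noise, then a grid of candidate periods each fit by linear least squares. With less than half a period observed, a wide range of periods fits essentially equally well.

```python
import numpy as np

rng = np.random.default_rng(0)

# 40 years of annual data from a "true" 100-year cycle plus noise --
# less than half a period, as with postwar macro time series.
t = np.arange(40.0)
y = np.sin(2 * np.pi * t / 100.0) + 0.5 * rng.standard_normal(t.size)

# For each candidate period, fit amplitude and phase by linear least
# squares and record the residual sum of squares (RSS).
periods = np.linspace(50.0, 200.0, 151)
rss = np.empty_like(periods)
for i, p in enumerate(periods):
    X = np.column_stack([np.sin(2 * np.pi * t / p),
                         np.cos(2 * np.pi * t / p)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss[i] = np.sum((y - X @ coef) ** 2)

# The "likelihood" in period is nearly flat: many periods come
# within a few percent of the best fit.
near_best = periods[rss <= 1.05 * rss.min()]
print(f"periods fitting within 5% of best: "
      f"{near_best.min():.0f}-{near_best.max():.0f} years")
```

The flat residual curve is the quantitative version of "the frequency estimate would be *highly* uncertain": the data can't distinguish a 100-year cycle from substantially longer or shorter ones, or from no cycle at all.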


Comments are welcome. Please see the Moderation and comment policy.

Also, try to avoid the use of dollar signs as they interfere with my setup of mathjax. I left it set up that way because I think this is funny for an economics blog. You can use € or £ instead.