Monday, July 25, 2016

Scopes and scales: the present value formula


I'm not entirely sure if this conversation broke off from the discussion of NGDP futures markets, but Nick Rowe put up a post about the difficulties of calculating the present value of currency. This represents another good example of why you need to be careful about scales and scope.

The basic present value formula of a coupon $C$ with interest rate $r$ at time $T$ is

$$
PV(C, r, T) = \frac{C}{(1 + r)^{T}}
$$

First, note that an interest rate is actually a time scale. The units of $r$ are percent per time period, e.g. percent per annum. Therefore we can rewrite $r$ as a time scale $r = 1/\tau$ where $\tau$ has units of time (representing, say, the e-folding time if you think about continuous compounding).
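To make the time scale explicit, the continuous compounding version of the same formula is

$$
PV = \lim_{n \rightarrow \infty} \frac{C}{(1 + r/n)^{n T}} = C e^{-r T} = C e^{-T/\tau}
$$

so $\tau$ is literally the e-folding time of the discounting.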

Second, this formula comes from looking at a finite non-zero interest rate over a finite period of time. You can see this because the formula breaks if you decide to take $T$ or $\tau$ to infinity in a different order:
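$$
\lim_{r \rightarrow 0} \; \lim_{T \rightarrow \infty} \; \frac{C}{(1 + r)^{T}} = 0
\qquad \text{while} \qquad
\lim_{T \rightarrow \infty} \; \lim_{r \rightarrow 0} \; \frac{C}{(1 + r)^{T}} = C
$$

(Discount forever at any positive rate and the coupon is worth nothing; never discount at all and it is worth its face value.)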


Paul Romer had this problem with Robert Lucas: the limit doesn't converge uniformly. Romer would call taking the limits as $r = 1/\tau \rightarrow 0$ and $T \rightarrow \infty$ in a particular order "mathiness". And I think mathiness here is an indicator that you need to worry about scope. Just think about it -- why would you calculate the "present value" of a coupon that had a zero interest rate? It's like figuring out how many stable nuclei decay (see previous link).

The present value formula does not apply to a zero interest rate coupon. It is out of scope.

There are only two sensible limits of the present value formula: $T/\tau = r T \gg 1$ and $T/\tau = r T \ll 1$. This means either $T \rightarrow \infty$ or $\tau \rightarrow \infty$ -- not both. If you want to take both to infinity at the same time, you have to introduce another parameter $a$ because then you can let $T = \tau/a$ and take the limit

$$
\lim_{T \rightarrow \infty} \frac{C}{(1 + a/T)^{T}} = C e^{-a}
$$

The present value can be anything between $C$ and zero. You could introduce other models for $T = T(\tau)$, but that's the point: the present value becomes model dependent. (That's what Nick Rowe does to resolve this "paradox" -- effectively introducing some combination of nominal growth and inflation.)

That also brings up another point: the present value formula doesn't have any explicit model dependence, but it does have an implicit model dependence! It critically depends on being near a macroeconomic equilibrium (David Glasner's macrofoundations). For example, it's possible the value of a corporate bond is closer to zero because there's going to be a major recession where the company defaults. Correlated plans fail to be simultaneously viable, and someone has to take a haircut.

Basically, the scope of the present value formula is a macroeconomy near equilibrium with a non-zero interest rate.

Saturday, July 23, 2016

The driving equilibrium

FRED has updated the data on the number of vehicle miles driven, and it looked to me like a perfect candidate for a "growth equilibrium state" analysis using minimum entropy like I did for unemployment (see that post for more details about the process). Here is the original data:


The growth rate bins (note this is not a log-linear graph, so these are not exponential growth rates, but rather linear growth rates) look something like this:


The slope that minimizes the entropy is about α = 0.062 Mmi/y ("dr" is the data list and "DR" is an interpolating function of the data list):
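Purely to illustrate the minimum-entropy procedure (this is not the code behind the plots, which used the "dr"/"DR" objects mentioned above), here is a rough Python sketch with a toy stand-in series; the series itself, the bin count, and the slope grid are placeholders:

```python
import numpy as np

def histogram_entropy(values, bins=30):
    """Shannon entropy (natural log) of the histogram of `values`."""
    counts, _ = np.histogram(values, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

def min_entropy_slope(t, y, slopes):
    """Find the linear slope a such that the detrended residuals
    y - a * (t - t[0]) have the spikiest (minimum entropy) distribution."""
    entropies = [histogram_entropy(y - a * (t - t[0])) for a in slopes]
    return slopes[int(np.argmin(entropies))]

# Toy stand-in for the miles-driven series (placeholder, not the FRED data):
rng = np.random.default_rng(0)
t = np.linspace(1971.0, 2016.0, 541)
y = 1.0 + 0.062 * (t - 1971.0) + 0.02 * rng.standard_normal(t.size)

alpha = min_entropy_slope(t, y, np.linspace(0.0, 0.2, 401))
print(f"minimum-entropy slope ~ {alpha:.3f} per year")
```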


And here is the minimized entropy distribution (i.e. the "spikiest" distribution):


Subtracting that trend and fitting a series of logistic functions (Fermi-Dirac distributions) to the data gives a pretty good fit:


The centers of the transitions are at 1974.4, 1980.3, and 2009.5 -- corresponding to the three longest recessions between 1971 and 2016.
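As a rough illustration of the fit step (not the actual code used), here is a minimal Python sketch using scipy's curve_fit on a toy detrended series; everything below is a placeholder except the three transition centers, which are the values quoted above:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import expit

def logistic_steps(t, c, A1, t1, w1, A2, t2, w2, A3, t3, w3):
    """Constant offset plus three logistic (Fermi-Dirac) transitions."""
    out = np.full_like(t, c, dtype=float)
    for A, t0, w in ((A1, t1, w1), (A2, t2, w2), (A3, t3, w3)):
        out += A * expit(-(t - t0) / w)   # = A / (1 + exp((t - t0) / w))
    return out

# Toy detrended series standing in for (data minus minimum-entropy trend):
rng = np.random.default_rng(0)
t = np.linspace(1971.0, 2016.0, 541)
truth = logistic_steps(t, 0.0, 0.05, 1974.4, 0.5, 0.04, 1980.3, 0.8, 0.08, 2009.5, 1.0)
residual = truth + 0.005 * rng.standard_normal(t.size)

# Initial guesses; transition centers from the fit quoted in the text.
p0 = [0.0, 0.05, 1974.4, 1.0, 0.05, 1980.3, 1.0, 0.05, 2009.5, 1.0]
popt, _ = curve_fit(logistic_steps, t, residual, p0=p0)
print("fitted transition centers:", popt[2], popt[5], popt[8])
```

This results in a pretty good "model" of the number of vehicle miles traveled: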


Friday, July 22, 2016

The monetary base continues to fall

The monetary base is continuing its slow fall; I haven't updated this graph in a while. But first the caveat from that post:
This is probably a sucky prediction anyway since there are only about 6 data points from after the rate hike and the noise (error) has been growing over time. The symmetry argument [that the fall will be at the same rate/curvature as the rise] is doing quite a bit of work here.
Anyway, it's not really too bad ...


It's within the 2-sigma error bands ...


Thursday, July 21, 2016

RGDP and employment equilibria


When I used this estimate of the "macroeconomic information equilibrium" (MIE) to claim that the mid-2000s were a "bubble" (i.e. RGDP was above the MIE), John Handley asked me what the counterfactual employment would be. Let's start with the MIE (gray) versus data (red) and "potential RGDP" from the CBO (via FRED) (black) from this post [1]:


The red line is above the gray line from roughly 2004 to 2008 -- that's the "bubble". We can use the information equilibrium relationship P : NGDP ⇄ EMP (EMP = PAYEMS on FRED, total non-farm employees) to say the growth rate of RGDP = NGDP/P is proportional to the growth rate of EMP; therefore the growth rate of the MIE should be the equilibrium growth rate of employment.
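To make that step explicit (a quick sketch, with $k$ denoting the information transfer index of the relationship):

$$
NGDP \sim EMP^{k} \;\; \text{and} \;\; P \equiv \frac{dNGDP}{dEMP} \sim k \; EMP^{k-1}
$$

so that $RGDP = NGDP/P \sim EMP/k$ and, for slowly varying $k$, the growth rate of RGDP tracks the growth rate of EMP. And indeed the MIE growth rate matches the equilibrium employment growth rate: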


In [1], I noted that the information equilibrium unemployment result was a relationship between the growth rates of employment and MIE RGDP (shown in the picture above) rather than the output gap level and the unemployment rate. Basically, in the information equilibrium model

$$
\text{(1) }\; \frac{d}{dt} \log U = \left( b + a \frac{d}{dt} \log RGDP_{MIE} \right) - \frac{d}{dt} \log EMP
$$

rather than

$$
\text{(2) }\; U = b + a (RGDP - RGDP_{P})
$$

Both of these relationships work pretty well:


However, the derivative in equation (1) basically tells us that we measure the unemployment rate relative to some constant value in the IE model. We have no idea what that value is -- it's the constant of integration you get from integrating equation (1) -- but it also doesn't matter.
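For reference, integrating equation (1) from an arbitrary reference time $t_{0}$ gives

$$
\log U(t) = \log U(t_{0}) + b (t - t_{0}) + a \log \frac{RGDP_{MIE}(t)}{RGDP_{MIE}(t_{0})} - \log \frac{EMP(t)}{EMP(t_{0})}
$$

where $\log U(t_{0})$ is that undetermined constant. Choose an arbitrary level of unemployment and then we'd say unemployment is "low" or "high" relative to that level. It is similar to the case of temperature -- 200 Kelvin is "cold", but 200 Celsius is "hot" because of the choice of zero point. Picking the average (5.9%) is as good a choice as any: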


And so -- unemployment was low during the mid-2000s (well, falling since we are talking about rates). The eras where RGDP grows parallel to the MIE are also "equilibria" where unemployment is "normal" -- early 1960s; late 80s to 1990; the mid-90s; 2001-2004.

As a side note, I'd like to refer back to this post [2] which looked at unemployment equilibria in terms of the rate of decline of unemployment (rate of growth of employment). You can see how this picture basically conforms with the picture above -- the MIE is an equilibrium of growth rates, not levels. Re-fitting the data to the cases of positive employment growth in the figure above, we get this picture:


There is an employment growth equilibrium (blue line) and negative deviations (i.e. a plucking model), which is exactly the model of [2] -- unemployment decline equilibrium and unemployment increases. The model of [2] can be considered to be the approximation to the model here for constant MIE RGDP growth (i.e. unemployment declines of constant slope -- figure below from [2]).


...

Update 22 July 2016

I was a bit premature when I said, in the paragraph above (after the pair of images), that you couldn't figure out a reasonable employment level. It's still a "fit" since the constant of integration information is unavailable, but that doesn't mean you can't create a "zero" for the temperature scale. Here is the employment level where I fit the constant of integration to the data:


You can still shift this curve up and down, but in the same way that you refer to temperature being high or low relative to a scale (absolute zero or water freezing), you can refer to employment being high or low relative to this curve regardless of where you set it.

Wednesday, July 20, 2016

Information equilibrium in neuroscience


Todd Zorick and I wrote a neuroscience paper on using information equilibrium to tell the difference between different sleep states with EEG data. The title is "Generalized Information Equilibrium Approaches to EEG Sleep Stage Discrimination" and it was (finally) published today.

This adds to the series of information equilibrium framework applications for complex systems like traffic or simple systems like transistors.

One can think of these distributions as ensembles of "information transfer states" analogous to productivity states, profit states (below), or growth states.


Sunday, July 17, 2016

An ensemble of labor markets


Believe it or not, this post is actually a response of sorts to David Glasner's nice post "What's Wrong with Econ 101" and related to John Handley's request on Twitter for a computation of the implied unemployment rate given an output gap (which I will answer more directly as soon as I find the code that generated the original graph in question). It is also a "new" model, but still fairly stylized. I will start with the partition function approach described in the paper as well as here. However instead of being written in terms of money, I will write it in terms of labor.

Consider aggregate demand as a set of markets with demand $A = \{ a_{i} \}$, labor supply $L = \{ \ell_{i} \}$, information transfer indices (which I will label 'productivity states' for reasons that will become clear later) $p = \{ p_{i} \}$, and a price level $CPI = \{ cpi_{i} \}$. I use the notation:

$$
cpi_{i} : a_{i} \rightleftarrows \ell_{i}
$$

which just represents the differential equation (information equilibrium condition)

$$
cpi_{i} \equiv \frac{da_{i}}{d\ell_{i}} = p_{i} \; \frac{a_{i}}{\ell_{i}}
$$

This has solution

$$
a_{i} \sim \ell_{i}^{p_{i}}
$$

You can see how the $p_{i}$ values are related to 'productivity'; if the growth rate of $\ell_{i}$ is $r_{i}$, then the growth rate of $a_{i}$ is $p_{i} r_{i}$ and the ratio of $\ell_{i}$ to $a_{i}$ (assuming exponential growth for the two variables) is $e^{p_{i}}$.  Let's take the labor market to be a single aggregate (analogously to taking the money supply to be a single aggregate), so that we can drop the subscripts on the $\ell$'s. Define the partition function

$$
Z(\ell) \equiv \sum_{i} e^{-p_{i} \log \ell}
$$

so that the ensemble average of operator $X$ (which will be over 100 markets, i.e. $i = 1 .. 100$ with different values of $p_{i}$ in the Monte Carlo simulations below) is:

$$
\langle X \rangle = \frac{\sum_{i} X \; e^{-p_{i} \log \ell}}{Z(\ell)}
$$

I assumed the productivity states had a normal distribution which results in the following plot of 30 Monte Carlo throws for 100 markets for $\langle \ell^{p}\rangle$:
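As a rough illustration only, here is a minimal Python sketch of this Monte Carlo setup; the mean and width of the normal distribution of productivity states and the range of $\ell$ are placeholders rather than the values fit to data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_markets, n_throws = 100, 30

# 'Productivity states' p_i for each Monte Carlo throw (placeholder parameters).
p = rng.normal(loc=0.5, scale=0.1, size=(n_throws, n_markets))

ell = np.linspace(1.0, 10.0, 200)   # labor supply (arbitrary units)

def ensemble_average(X, p_i, ell):
    """<X> = sum_i X_i exp(-p_i log ell) / Z(ell) for a single throw.
    X has shape (len(ell), n_markets)."""
    w = np.exp(-np.outer(np.log(ell), p_i))   # Boltzmann-like weights
    return (X * w).sum(axis=1) / w.sum(axis=1)

for p_i in p:                                  # 30 Monte Carlo throws
    ell_p = np.power.outer(ell, p_i)           # ell^{p_i}
    avg_output = ensemble_average(ell_p, p_i, ell)                    # <ell^p>
    avg_price = ensemble_average(p_i * np.power.outer(ell, p_i - 1.0),
                                 p_i, ell)                            # <p ell^{p-1}>
    # plot avg_output (and avg_price) versus ell for each throw
```

The second average, $\langle p \ell^{p-1}\rangle$, is the one used for the price level further below.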


The ensemble of 'productivity states' $p_{i}$ looks something like this:


This picture illustrates David Glasner's point about general and partial equilibrium analysis in economics 101. The distribution (in black) represents a macroeconomic general equilibrium. Individual firms or markets will move between different productivity states over time. Partial equilibrium analysis looks at individual states that move or have idiosyncratic shocks that do not change the overall distribution. If the distribution changes, you have moved to a different general equilibrium and partial equilibrium analysis will not suffice. That is to say Economics 101 assumes this distribution does not change. (In this model, shocks are represented as productivity shocks, changing the values of $p_{i}$ and/or the average of the distribution in the figure.)

We can fit this ensemble average to the graph of nominal output (NGDP) versus total employees (PAYEMS on FRED, measured in Megajobs [Mj]):


The price level should then be given by the ensemble average $\langle p \ell^{p-1}\rangle$ (switching to linear scale instead of log scale):


The black lines in these graphs represent a single macroeconomic general equilibrium; as you can see, the true economy (blue) fluctuates around that equilibrium. Now the ensemble of individual markets can be represented as a single macro market, but with a (slowly) changing value of the single 'macro' information transfer index $p = \langle p_{i} \rangle$ plotted below:


Note that as the size of the labor market grows, productivity falls. This is analogous to the interpretation of the result for nominal output in terms of money supply: a large economy has more ways of being organized as many low growth states than as a few high growth ones. In thermodynamics, we'd interpret a larger economy (larger money supply or larger labor supply) as a colder economy.

The single macro market with changing $p$ is $CPI : A \rightleftarrows L$, which is basically the information equilibrium relationship that leads to Okun's law (shown in the paper as well as here in terms of hours worked, but labor supply also leads to a decent model, graph from here):


This correspondence means that we should view the difference between equilibrium output and actual output as well as the difference between the equilibrium price level and the actual price level as a measure of the output gap (difference from potential NGDP) and the unemployment rate, respectively ... which works pretty well for such a simple model:



Differences between the model and data could be interpreted as non-ideal information transfer which includes correlations among the labor markets (e.g. coordinating plans that fail together) or deviation from the equilibrium distribution of productivity states. Note that in the output gap calculation, you can see what looks like a plucking model of non-ideal information transfer with a changing equilibrium.

This simple model brings together output and employment, falling inflation and productivity, as well as macroeconomic general equilibrium and microeconomic partial equilibrium in a single coherent framework. It's not perfect empirically, but there isn't much competition out there!

...

Update 19 July 2016

Here's how the distribution of 'productivity' states changes as $\ell$ increases:


Wednesday, July 13, 2016

List of standard economics derived from information equilibrium

Here is a (possibly incomplete, and hopefully growing) list of standard economic results I have derived from information equilibrium. It will serve as a reference post. This does not mean these results are "correct", only that they exist given certain assumptions. For example, the quantity theory of money is only really approximately true if inflation is "high". Another way to say this is that information equilibrium includes these results of standard economics and could reduce to them in certain limits (like how quantum mechanics reduces to Newtonian physics for large objects).

In a sense, this is supposed to serve as an acknowledgement (or evidence) that information equilibrium has a connection to mainstream economics ... and that it's not completely crackpottery.

Supply and demand

http://informationtransfereconomics.blogspot.com/2013/04/supply-and-demand-from-information.html

Price elasticities

http://informationtransfereconomics.blogspot.com/2013/04/the-previous-post-with-more-words-and.html

AD-AS model

http://informationtransfereconomics.blogspot.com/2015/04/what-does-ad-as-model-mean.html

IS-LM

http://informationtransfereconomics.blogspot.com/2013/08/deriving-is-lm-model-from-information.html
http://informationtransfereconomics.blogspot.com/2014/03/the-islm-model-again.html
http://informationtransfereconomics.blogspot.com/2016/02/the-is-lm-model-as-effective-theory-at.html

Quantity theory of money

http://informationtransfereconomics.blogspot.com/2013/07/recovering-quantity-theory-from.html
http://informationtransfereconomics.blogspot.com/2015/05/money-defined-as-information-mediation.html

Cobb-Douglas functions

http://informationtransfereconomics.blogspot.com/2014/05/more-on-cobb-douglas-functions-and.html

Solow growth model

http://informationtransfereconomics.blogspot.com/2014/12/the-information-transfer-solow-growth.html
http://informationtransfereconomics.blogspot.com/2015/05/the-rest-of-solow-model.html
http://informationtransfereconomics.blogspot.com/2015/05/dynamics-of-savings-rate-and-solow-is-lm.html

Gravity models

http://informationtransfereconomics.blogspot.com/2015/09/information-equilibrium-and-gravity.html

Utility maximization

http://informationtransfereconomics.blogspot.com/2015/03/utility-in-information-equilibrium-model.html

Asset pricing equation

http://informationtransfereconomics.blogspot.com/2015/05/the-basic-asset-pricing-equation-as.html

MINIMAC

http://informationtransfereconomics.blogspot.com/2015/06/minimac-as-information-equilibrium-model.html

Mundell-Fleming as Metzler diagram

http://informationtransfereconomics.blogspot.com/2016/06/metzler-diagrams-from-information.html

Diamond-Dybvig

http://informationtransfereconomics.blogspot.com/2015/04/diamond-dybvig-as-maximum-entropy-model.html

"Econ 101" effects of price ceilings or floors

http://informationtransfereconomics.blogspot.com/2016/05/what-happens-when-you-push-on-price.html

"A statistical equilibrium approach to the distribution of profit rates"


That's the title of a paper [pdf] by Gregor Semieniuk (and his co-author) who tried to get a hold of me at the BPE meeting (which I was unfortunately unable to attend, but did make some slides). The paper notes that the distributions of the rates of profit appear to have invariant distributions suggestive of statistical equilibrium; here's a diagram:



This diagram is reminiscent of the diagrams I've used when talking about the distribution of growth states in an economy, most colorfully illustrated at this link (and discussed here and in my paper; note that Gregor's diagram has a log scale for the y-axis).


In fact, it might be directly related. If we have a series of markets (or firms) where income $I_{k}$ is in information equilibrium with expenses $E_{k}$ with "price" $p_{k}$, i.e. $p : I \rightleftarrows E$, in general equilibrium we have

$$
\frac{I_{k}}{I_{k, ref}} = \left( \frac{E_{k}}{E_{k, ref}}\right)^{p_{k}}
$$

so that for $E_{k} \approx E_{k, ref}$

$$
I_{k} \approx I_{k, ref} + p_{k} \frac{I_{k, ref}}{E_{k, ref}} (E_{k}-E_{k, ref})
$$

Therefore $p_{k}$ determines the rate of profit (difference between income and expenses). It determines how much bigger $I$ is given $E$. If $p = 1$, then profit is zero because income equals expenses. If $p > 1$, the firm is profitable because $I > E$; vice versa for $p < 1$. In the partition function approach (also discussed in the paper and the link above), the "prices" represent growth states; here, the "prices" represent profit states. The distribution represents an equilibrium "temperature".
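To spell that out under one extra assumption: if the reference state is taken to have zero profit ($I_{k, ref} = E_{k, ref}$), the linearization above gives

$$
I_{k} - E_{k} \approx (p_{k} - 1)(E_{k} - E_{k, ref})
$$

so profit is positive for $p_{k} > 1$, zero for $p_{k} = 1$, and negative for $p_{k} < 1$ as expenses grow past the reference point.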

We'd also expect the profit states to have some distribution around a mean value, but with negative profit states over-represented due to non-ideal information transfer. This is as observed in the diagram from Gregor's paper. It is also observed in nominal growth data over time** (figure from my paper linked above; also see here):


** This implies a macroeconomy is ergodic: the distribution of temporal states is the same as the distribution over ensembles of states.

Monday, July 11, 2016

Ceteris paribus and method of nascent science


I read this from Arnold Kling about models in economics:
... In economics, models are often used for a different purpose. The economist writes down a model in order to demonstrate or clarify the connection between assumptions and conclusions. The typical result is a conclusion that states 
All other things equal [ceteris paribus], if the assumptions of this model hold, then we will observe that when X happens, Y happens. 
... However, suppose that we observe a situation where X happens and Y does not happen. Does that refute the model? I would say that what it refutes is the prefatory clause “[all] other things equal, if the assumptions of this model hold.” That is, we may conclude that [all] other things were not equal or that the assumptions of the model do not hold.
Kling did leave off the [all] which I inserted above. This seems to be not atypical of economists' approach to models. I put up immediate thoughts on Twitter:
"Ceteris paribus" is problematic. All other influences aren't always known, so model fail due to [ceteris paribus] violation tells you exactly nothing.
This led to a discussion with Brennan Peterson (who I discovered recently was a friend of a close friend of mine I knew from grad school -- small world) that brought up some good points. For example, even in the natural sciences you don't necessarily know all other influences and therefore you cannot be certain you've isolated the system. Very true.

In fact, I discussed before how natural sciences got lucky in this regard:
I don't think this [summary of Dani Rodrik's view of models] could have been put in a better way to illustrate what is wrong with this view and how lucky scientists turned out to be. Our basic human intuition that effects tend not to pass through barriers and that increased distance from something diminishes its effect turned out to be right in most physical theories of the world. That is to say even without a theoretical framework for how the world worked, our intuition on what reduced the impact of extraneous influence was right. ... Scientists were able to boot-strap themselves into the theory because our physical intuitions matched up with the correct theory.
I'd add here that this is probably not an accident: we evolved (and grew up) dealing with the natural world, therefore our intuitions (and learned behavior) about the natural world should be useful. In that post, I contrasted this with economics, where not only do we not know how to isolate the system, but we have observed human cognitive biases with regard to economic decisions (money illusion, endowment effect). In macroeconomics, isolating monetary policy with theory (i.e. ceteris paribus conditions) is incredibly difficult because we don't really know what that theoretical framework is. Additionally, it could well be that even the microeconomic effects we observe require us to be near a macroeconomic equilibrium (Glasner's macrofoundations of micro), so the problems with macro infect micro as well.

If you think about it, the scientific method doesn't actually work for situations like this, and it's not just about Hume's uniformity of nature assumption and the problem of induction.

The induction problem is basically the question of whether you can turn a series of observations into a rule that holds for future observations (the temporal version -- will the sun rise tomorrow?), or whether you can turn a series of observations of members of a class into a rule about the class (the "spatial" version -- are all swans white?).

The basic conclusion is that you can't do this logically. From every sunrise, you cannot logically make any conclusion about the next sunrise without either 1) assuming it will rise because you've observed that it happens every ~24 hours and that's worked out so far, or 2) using a theoretical framework that has been useful for other things.

What we are talking about with economics is whether we can even define what a sunrise is because you need that theoretical framework in order to define it, but you couldn't build that framework without basing it on the empirical regularity that is the sunrise. Another way to say this is that you have a chicken or the egg problem: a theoretical framework aggregates past empirical successes, but you need the theoretical framework to understand the empirical data to determine those successes.

With the sunrise, we as humans took the route of the Samuel Johnsons and told the effective George Berkeleys saying the sunrise doesn't exist that we refute it thus, pointing emphatically at the really bright thing on the horizon. (Not really, but imagine we invented philosophy and logic before religion, which takes the assumption route about the sunrise mentioned above.)

With immaterialism or the problem of induction, I personally would say: "Well, logically, you could take that position ... stuff doesn't exist and you can't extrapolate from a series of sunrises ... but how does that help you? What is it good for?" And I think that's the key point in how you bootstrap yourself from blindly groping in the dark for understanding to the scientific method. The scientific method needs a seed from which to start, and that seed is a collection of useful facts. Not rigorous, not logical ... just useful. This could be e.g. evolutionary usefulness (survival of the species).

Let's go back to Kling's model. When it fails because ceteris was not paribus, and we lack a theoretical framework organizing prior empirical successes that contains that model, we learn exactly nothing. If the model had been inside a useful theoretical framework, then we'd have learned something about the limits of that framework. Without one, we learn nothing.

We learn that "all other things equal" is false; some things weren't equal. But this doesn't tell us which things weren't equal, only they exist -- completely useless information. We could then take the option of just assuming the model is true, but just not for the situation we encountered.

We should reject Kling's model not because it is rigorously and scientifically rejected by empirical data (it isn't), but because it is useless. A while ago, Noah Smith brought up the issue in economics that there are millions of theories and no way to reject them scientifically. And that's true! But I'm fairly sure we can reject most of them for being useless.

"Useless" is a much less rigorous and much broader category than "rejected". It also isn't necessarily a property of a single model on its own. If two independently useful models are completely different but are both consistent with the empirical data, then both models are useless. Because both models exist, they are useless. If one didn't, the other would be useful. My discussion with Brennan touched on this -- specifically the saltwater/freshwater divide in macro. I'm not completely convinced this is an honest academic disagreement (it seems to be political), but let's say it is. Let's say both saltwater and freshwater macro aren't rejected by the empirical data and they both give "useful" policy advice on their own. Ok, well both models are useless because they provide different policy prescriptions and there's no way to say which one is right.

It's kind of like advice that says you could take either option you put forward. It's useless. That's basically the source of Harry Truman's quip about a one-armed economist.

Usefulness is how you bootstrap yourself into doing real science -- there's a scientific method and a scientific method for nascent science. And economics should be considered a nascent science. It is qualitatively different than an established science like physics.

Physics wasn't always an established science; in the 1500s and 1600s it was nascent. The useful things were the heliocentric solar system (it was easier to calculate where planets would be, which we only really cared about for religious and astrological reasons), Galileo's understanding of projectile motion (to aim cannons), and Huygens' optics. Basically: religious and military utility. These were organized into a theoretical framework by Newton, and physics as a science, not just a nascent science, started.

In medicine, we had a large collection of useful treatments (e.g. the ideas behind triage developed in the French Revolution), sterilization (pasteurization before Pasteur), and public health (cholera in London) before the germ theory of disease. Medicine didn't really become a science until the 1900s.

In economics, both macro and micro, we probably have a few of the "useful" concepts in our possession. Supply and demand. Okun's law. The quantity theory of money is probably useful in some form (though not necessarily as it exists now). We probably need a few more. I'd personally like to put forward this graph about interest rates as a potential candidate:


However, we should differentiate between nascent science and established science. Established science uses the scientific method and has a philosophy that includes things like falsifiability (not necessarily Popper's form). Nascent sciences need much more practical -- much more human-focused -- metrics like usefulness.

...

Update 12 July 2016

Good comments from Tom Hickey.

Saturday, June 25, 2016

About that graph ...

There's a graph I've seen around the internet (most recently from Timothy B. Lee on Twitter) that contains the message: "The earlier you abandon the gold standard and start your new deal, the better ..." typed over a graph from Eichengreen (1992):


Looks pretty cut and dried, but I'd never looked at the data myself. So I thought I'd have a go at figuring out what this graph looks like outside of the domain shown. First note that this is a graph of an industrial production index, not NGDP, normalized to 1929. The data was a bit difficult to extract; some of it is from FRED, some was digitized from the graph itself, and a few extra points for Japan come from this interesting article from the BoJ [pdf] about the economy during this time period. The results make the case a little less cut and dried (colors roughly correspond):


The graph at the top of this post is shown as the gray dashed line. First, with the other data, it is hard to say anything much is happening in the UK:


The UK actually pegs its interest rate at 2% in the period of the gray bar (which seems to generate inflation), but the overall picture is "meh". However, in that article above, we learn Japan actually started a "three-arrow" approach (similar to the one employed in Abenomics) in 1931 that included pegging its currency to the UK (gray bar in graph below):


Now pegging your currency to another is abandoning the gold standard (at least if the other currency isn't on the gold standard, which the UK didn't leave until later according to the graph -- when it also started pegging its interest rate). Additionally, in 1931 Japan began the military build-up that would eventually culminate in the Pacific theater of WWII (remember, this is an industrial production index). Military build-up (starting in 1933) and pegged interest rates (gray) could be involved in Germany as well:


We can pretty much leave France out of the conclusion of the graph at the top of the page because there's really no data. That leaves the US:


The pegged interest rates (gray) don't seem to have much to do with the highlighted red segment (see more here about the US in the Great Depression and the Veterans bonus; see also here and here about the period of hyperinflation and price controls), but the New Deal, started in 1933, probably does. It could be the gold standard, but having both happen at the same time makes it hard to extract the exact cause. In fact, you could even say this is just a relatively flat random walk from 1920 until the US military build-up starts in 1939. Without WWII, I could have seen this graph continuing to the right with periods of rising and falling industrial production.

So what looks like a cut and dried case about the gold standard may have much more to do with fiscal policy than monetary policy. In fact, what seemed to happen was that a bunch of different policies were tried and we still can't quite extract which one did what -- the answer is likely a muddle of fiscal and monetary policies. And we haven't even gotten to whether industrial production is the right metric.