Saturday, June 25, 2016

About that graph ...

There's a graph I've seen around the internet (most recently from Timothy B. Lee on Twitter) that contains the message: "The earlier you abandon the gold standard and start your new deal, the better ..." typed over a graph from Eichengreen (1992):

Looks pretty cut and dried, but I'd never looked at the data myself. So I thought I'd have a go at figuring out what this graph looks like outside of the domain shown. First note that this is a graph of an industrial production index, not NGDP, normalized to 1929. The data was a bit difficult to extract; some of it is just from FRED, some was digitized from the graph itself, and a few extra points for Japan come from this interesting article from the BoJ [pdf] about the economy during this time period. The results make the case a little less cut and dried (colors roughly correspond):

The graph at the top of this post is shown as the gray dashed line. First, with the other data, it is hard to say anything much is happening in the UK:

The UK actually pegs its interest rate at 2% in the period of the gray bar (which seems to generate inflation), but the overall picture is "meh". However, in that article above, we learn Japan actually started a "three-arrow" approach (similar to the one employed in Abenomics) in 1931 that included pegging its currency to the UK (gray bar in graph below):

Now pegging your currency to another is abandoning the gold standard (at least if the other currency isn't on the gold standard -- which the UK didn't leave until later according to the graph, when it also started pegging its interest rate). Additionally, 1931 is when Japan began the military build-up that would eventually culminate in the Pacific theater of WWII (remember, this is an industrial production index). Military build-up (starting in 1933) and pegged interest rates (gray) could be involved in Germany as well:

We can pretty much leave France out of the conclusion of the graph at the top of the page because there's really no data. That leaves the US:

The pegged interest rates (gray) don't seem to have much to do with the highlighted red segment (see more here about the US in the Great Depression and the Veterans bonus; see also here and here about the period of hyperinflation and price controls), but the New Deal, started in 1933, probably does. It could be the gold standard, but having both happen at the same time makes it hard to extract the exact cause. In fact, you could even say this is just a relatively flat random walk from 1920 until the US military build-up starts in 1939. Without WWII, I could have seen this graph continuing to the right with periods of rising and falling industrial production.

So what looks like a cut and dried case about the gold standard may have much more to do with fiscal policy than monetary policy. In fact, what seemed to happen was that a bunch of different policies were tried and we still can't quite extract which one did what -- the answer is likely a muddle of fiscal and monetary policies. And we haven't even gotten to whether industrial production is the right metric.
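As an aside, the normalization to 1929 used in these graphs is simple to reproduce; a minimal sketch with made-up numbers (not the actual Eichengreen data):

```python
# Rebase an industrial production index so that 1929 = 100.
# The series values here are illustrative, not the actual data.
years = [1927, 1928, 1929, 1930, 1931, 1932]
index = [95.0, 102.0, 110.0, 99.0, 88.0, 76.0]

base = index[years.index(1929)]          # value in the base year
rebased = [100.0 * v / base for v in index]

print(dict(zip(years, [round(r, 1) for r in rebased])))
```

Each series in the graph is divided by its own 1929 value, so the curves all pass through the same point and only relative movements matter.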

Friday, June 24, 2016

Gronwall's inequality and information transfer

In light of Brexit, non-ideal information transfer is now much more salient. And because of that, I thought it might be a good time to post this information transfer-specific form of Gronwall's inequality that's been languishing as a draft for a while.


One of the key lemmas I've used liberally to say the solution of a differential inequality is bounded by the solution of the corresponding  differential equation is called Gronwall's inequality. It's useful in stochastic differential equations, among other applications. It is not a general result for all differential equations, but fortunately applies precisely in the case we consider in the information transfer framework. It is written differently in the Wikipedia article linked, but I'd like to show this is just a notational difference. The differential equation that the inequality applies to is 

u'(t) \leq \beta (t) u(t)

This is just the equation

\frac{du(t)}{dt} \leq \beta(t) u(t)

and if $\beta (t) = k/t$, we have

\frac{du(t)}{dt} \leq k \; \frac{u(t)}{t}

and we can select the variables to be whatever we'd like (and take the function arguments to be implied). Therefore, given an information transfer relationship $\alpha \rightarrow \beta$, we can say the solution to the differential inequality:

\frac{d\alpha}{d\beta} \leq k \; \frac{\alpha}{\beta}

is bounded by the corresponding information equilibrium relationship $A \rightleftarrows B$

\frac{dA}{dB} = k \; \frac{A}{B}

with solution

A(B) = A(B_{ref}) \exp \left( k \int_{B_{ref}}^{B} dB' \; f(B') \right)

taking $A(B_{ref}) \equiv A_{ref}$ and integrating $f(B') = 1/B'$

A(B) = A_{ref} \exp \left( k \left( \log B - \log B_{ref} \right) \right)

rearranging (and putting it in the form I usually use)

\frac{A}{A_{ref}} =  \left( \frac{B}{B_{ref}} \right)^{k}

and Gronwall's inequality tells us the solution to the differential inequality is bounded by this solution:

\frac{\alpha}{\alpha_{ref}} \leq  \left( \frac{\beta}{\beta_{ref}} \right)^{k}

which is just the specific case (with $f(\beta') = 1/\beta'$) of the general bound

\alpha(\beta) \leq \alpha(\beta_{ref}) \exp \left( k \int_{\beta_{ref}}^{\beta} d\beta' \; f(\beta') \right)
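As a numerical sanity check of the bound, here is a sketch with arbitrary values ($k = 0.7$, and a right-hand side deliberately scaled by 0.8 so the differential inequality is strict):

```python
# Forward-Euler integrate d(alpha)/d(beta) = 0.8 * k * alpha / beta,
# which satisfies the differential inequality strictly, and check the
# result stays below the information equilibrium solution
# alpha_ref * (beta / beta_ref)**k. The parameter values are arbitrary.
k = 0.7
beta_ref, alpha_ref = 1.0, 1.0
n, beta_max = 100000, 10.0
d_beta = (beta_max - beta_ref) / n

alpha, beta = alpha_ref, beta_ref
ok = True
for _ in range(n):
    alpha += 0.8 * k * alpha / beta * d_beta  # Euler step
    beta += d_beta
    ok = ok and alpha <= alpha_ref * (beta / beta_ref) ** k

print(ok)  # True: the bound holds along the whole trajectory
```

The exact solution of the scaled equation is $(\beta/\beta_{ref})^{0.8 k}$, which sits below the power law $(\beta/\beta_{ref})^{k}$ for $\beta > \beta_{ref}$, as the inequality requires.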

Thursday, June 23, 2016

Summer blogging recess (and no-go theorems)

I have a bunch of things to attend to over the next month or so, so there will be few (if any) posts. I plan on working (the real job), taking a couple of trips (work- and non-work-related), thinking about a problem Cameron Murray suggested, and finishing up the first draft of the book (see here; I've been spending some time away from the draft, hoping to come back to it with new eyes).

I will also continue to be baffled by the stock-flow accounting advocates (see comments here and here). It seems more and more to be a paradigm akin to market monetarism to me. Market monetarism exists in order to deny the effectiveness of fiscal stimulus [as a primary policy tool]; stock-flow accounting exists to deny the existence of any weakly-emergent properties of macroeconomies. Everything they seem to say is inconsistent with this picture:

Not this model, mind you, but the existence of this picture. The weakly-emergent travelling wave cannot exist -- even though this kind of weakly-emergent structure is perfectly consistent with strict accounting of the variables.

All I get in response are attempts to teach me accounting! Trust me -- accounting "theorems" don't quite rise to the level of Stokes' Theorem; this stuff is not complex except in vocabulary. I have a degree in math with a particular emphasis in topology and differential geometry. The discrete point set on which accounting is defined can't be more complex than the typical topology of the Cantor set. However, the main issue is that "you don't understand accounting" isn't particularly responsive to the question: Why are there no weakly-emergent (e.g. field) quantities in systems that obey accounting principles?

Is there some proof of this? It would be some kind of no-go theorem for certain covering spaces of a space defined by a set of functions on some discrete point set, network, or connected intervals. This is of course not generally true: all of solid state electronics is a counterexample, as well as things like temperature and pressure.

There would have to be something specific about the accounting equations that prevents this. However, accounting operators aren't exactly elliptic differential operators, so I have difficulty imagining a no-go theorem arising from addition and multiplication.

This may sound like I'm trying to win an argument by bringing up a lot of high level mathematics, but that's actually what's at issue. Denying weakly emergent quantities is a pretty high level mathematical question about the properties of discrete spaces. I can't imagine it can be answered by double entry bookkeeping.

Monday, June 20, 2016

Metzler diagrams from information equilibrium

Paul Krugman has a post today where he organizes some DSGE model results in a simplified Mundell-Fleming model represented as a Metzler diagram. Let me show you how this can be represented as an information equilibrium (IE) model.

We have interest rates $r_{1}, r_{2}$ in two countries coupled through an exchange rate $e$. Define the interest rate $r_{i}$ to be in information equilibrium with the price of money $M_{i}$ in the respective country (with money demand $D_{i}$) -- this sets up four IE relationships:

r_{1} & \rightleftarrows p_{1}\\
p_{1} : D_{1} & \rightleftarrows M_{1}\\
r_{2} & \rightleftarrows p_{2}\\
p_{2} : D_{2} & \rightleftarrows M_{2}

This leads to the formulas (see the paper)

\text{(1) }\; r_{i} = \left( k_{i} \frac{D_{i}}{M_{i}}\right)^{c_{i}}

Additionally, exchange rates are basically given as a ratio of the price of money in one country to another:

e \equiv \frac{p_{1}}{p_{2}} = \alpha \frac{M_{1}^{k_{1}-1}}{M_{2}^{k_{2}-1}}

And now we can plot the formula (1) versus $M_{1}^{k_{1}-1}$ (blue) and $M_{2}^{1-k_{2}}$ (yellow) at constant $D_{i}$ (partial equilibrium: assuming demand changes slowly compared to monetary policy changes). This gives us the Metzler diagram from Krugman's post and everything that goes along with it:

Also, for $k \approx 1$ (liquidity trap conditions), these curves flatten out:
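Equation (1) and the exchange rate ratio are easy to evaluate numerically; a minimal sketch with illustrative parameter values (the $k_i$, $c_i$, and $D_i$ here are placeholders, not fits to data):

```python
# Sketch of the IE Metzler diagram ingredients: interest rates from
# r_i = (k_i * D_i / M_i)**c_i and the exchange rate ratio
# e ~ M1**(k1 - 1) / M2**(k2 - 1). All parameter values are
# illustrative placeholders.
k1, k2 = 1.5, 1.3      # information transfer indices
c1, c2 = 0.8, 0.8      # interest rate exponents
D1, D2 = 100.0, 100.0  # money demand, held constant (partial equilibrium)

def r(k, D, M, c):
    return (k * D / M) ** c

M_grid = [50.0 + 5.0 * i for i in range(21)]
r1 = [r(k1, D1, M, c1) for M in M_grid]
e = [M ** (k1 - 1.0) / D2 ** (k2 - 1.0) for M in M_grid]  # up to alpha

# Expanding M1 lowers the interest rate (liquidity effect) while the
# exchange rate variable rises -- the downward-sloping Metzler curve.
decreasing = all(r1[i] > r1[i + 1] for i in range(len(r1) - 1))
increasing = all(e[i] < e[i + 1] for i in range(len(e) - 1))
print(decreasing, increasing)  # True True
```

Plotting $r_1$ against $e$ on this grid reproduces the downward-sloping curve; for $k_1$ near 1 the exchange rate variable barely moves with $M_1$, which is the flattening in the liquidity trap picture.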

Stock flow accounting with calculus

Commenter Peiya took issue with my comments on stock-flow analysis in the previous post. This post is mostly a response to point (3) that needed a bit more space and mathjax to be properly addressed.

The equation in question is (call stock $S$ and flow $F$, and define a revaluation functional $\alpha = \alpha [S]$):

S(t) = \alpha [ S(t-\Delta t) ] + F(t)

First, let's say $S(t-\Delta t) = 0$ (you're starting from zero); then this equation asserts $S(t) = F(t)$, i.e. a stock is equal to a flow. This doesn't make sense unit-wise (a flow represents a change in something and therefore has to happen over a period of time, hence there is a time scale associated with the right hand side, but not the left). Per Peiya, $F(t)$ is defined on an interval $[t, t + \Delta t)$; this allows us to define a function $\phi$ (assuming $F \in C^{2}$ except on a countable subset) with the same support such that

F(t) \equiv \int_{t}^{t + \Delta t} dt' \phi(t') = \Phi (t+\Delta t) - \Phi (t) \equiv \Delta \Phi

Thus we can actually see that $F$ is really a stock variable made from a change in stock over some time scale. Assuming a short interval where $\phi$ is approximately constant, we can see the time scale $\Delta t$ pop out:

F(t) \approx \int_{t}^{t + \Delta t} dt' \phi \approx \phi \cdot \Delta t \equiv \Delta \Phi

So how do we treat stock-flow analysis in a way that is consistent with mathematics? Let's start from $t = 0$ in that first equation. We have (taking a constant revaluation $\alpha$ per time period for simplicity; the general case can be handled as long as $\alpha$ isn't pathological)

S(\Delta t) = \int_{0}^{\Delta t} dt' \phi_{1}(t')

S(2 \Delta t) = \alpha \int_{0}^{\Delta t} dt' \phi_{1}(t') + \int_{\Delta t}^{2\Delta t} dt' \phi_{2}(t')

S(3 \Delta t) = \alpha^{2} \int_{0}^{\Delta t} dt' \phi_{1}(t') + \alpha \int_{\Delta t}^{2\Delta t} dt' \phi_{2}(t') + \int_{2\Delta t}^{3\Delta t} dt' \phi_{3}(t')

etc., so that (taking $\phi_{k} = 0$ outside the interval $[(k-1) \Delta t, k \Delta t)$) we can write

S(t = n \Delta t) = \int_{0}^{t} dt' \sum_{i = 1}^{n} \alpha^{n-i} \phi_{i}(t')

and defining

\sum_{i = 1}^{n} \alpha^{n-i} \phi_{i}(t') \equiv \tilde{\phi} (t')

we obtain (assuming $S \in C^{2}$ except on a countable subset)

S(t) = \int_{0}^{t} dt' \tilde{\phi} (t') \equiv \Phi (t) - \Phi (0)

We're back to the case where the initial stock was zero. Essentially a change in stock over a time scale ($t$) is equivalent to a flow, and everything I said about scales and metrics and free parameters in this post follows.
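The recursion $S(t) = \alpha S(t - \Delta t) + F(t)$ and the weighted sum of flows are easy to check against each other numerically; a minimal sketch with arbitrary numbers (each $F_i$ stands in for the integral of $\phi_i$ over its interval):

```python
# Check that the recursion S_n = alpha * S_{n-1} + F_n reproduces
# the closed form S_n = sum_i alpha**(n-i) * F_i, where the F_i stand
# in for the integrals of phi_i over each interval. Numbers are
# arbitrary illustrations.
alpha = 0.95                      # constant revaluation per period
F = [10.0, -3.0, 7.5, 2.0, -1.0]  # net flow credited each period

S = 0.0
for f in F:                       # iterate the recursion
    S = alpha * S + f

n = len(F)
S_closed = sum(alpha ** (n - i) * f for i, f in enumerate(F, start=1))

print(abs(S - S_closed) < 1e-12)  # True
```

The stock at any time is just a revaluation-weighted cumulative sum of flows, which is the integral representation above in discrete form.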

I do not understand the resistance to the idea that calculus can handle accounting. There is no logically consistent definition of stocks, flows, time intervals, and accounting rules that cannot be represented with a stock as the integral of a flow over a time scale. Attempts to construct one just introduce logical inconsistencies (like stocks being equal to flows above).

Saturday, June 18, 2016

Regime dependent modeling and the St. Louis Fed

The STL Fed has made a splash with its new "regime-switching" [pdf] forecasting (H/T commenter eli). Part of the reason is the divergence of one of the dots from the rest of the dots in the FOMC release. Here's Roger Farmer with a critique (who also liked my earlier post/tweet about a falling unemployment equilibrium that is basically the same as his increasing/decreasing states). So here is a diagram of the states the STL Fed categorizes:

Since productivity and the "natural" rate of interest (approximately the STL Fed's r†) aren't necessarily directly observable, one can think of this as setting up a Hidden Markov or Hidden Semi-Markov Model (HSMM) (see Noah Smith here).

I'd like to organize these states in terms of the information equilibrium (IE) framework (in particular, using the DSGE form, see here for an overview). The IE framework is described in my preprint available here (here are some slides that give an introduction as well). What follows is a distillation of results (and a simplified picture of the underlying model). The IE model has two primary states defined by the information transfer index k. If k > 1, then nominal output, price level and interest rates all tend to increase:

If k ~ 1, then you can get slow nominal output growth, low inflation (slower price level growth) and decreasing interest rates:

This organizes (nominal) interest rates into two regimes -- rising (inflation/income effect dominating monetary expansion) and falling (liquidity effect dominating monetary expansion) (graph from link):

This model results in a pretty good description of interest rates over the post-war period (graph from link, see also here):

The two regimes are associated with high inflation and low inflation, respectively (as well as high productivity growth and low productivity growth). In the IE framework, recessions appear on top of these general trends (with worse recessions as you get towards low interest rates). Coordination (agents choosing the same action, like panicking in a stock market crash or laying off employees) can cause output to fall, and there appears to be a connection between these shocks and unemployment spikes. This is a two-state employment equilibrium: rising unemployment (recession/nominal shock) and falling unemployment (graph from link):

Overall, this inverts the direction of the flow of the STL Fed model and we have a picture that looks like this:

Friday, June 17, 2016

What does it mean when we say money flows?

Last night through this morning, I butted into an exchange on Twitter about the flow of money between Steve Roth and Noah Smith (who promptly booted me from whatever strange nonsense he wanted to blather on about ... you can't send a signal with wavefunction collapse and it has nothing to do with money flow [economists: stop talking about quantum mechanics]). It brings up some interesting points. Steve started with the observation that money appears to teleport from one account to another -- there is no "flow" per se. Noah said this was just like water which I assume was an attempt at Ken M style trolling because it makes no sense.

Anyway, here is an animation of dots being instantly debited from one account (box) and credited to another; I know because I wrote the Mathematica code that does it (feel free to email me via the box on the sidebar for a copy):

It's from this post. There is strict accounting (the number of dots is constant, and one dot taken from one box is exactly matched by a dot added to another). Then Steve asked the question that you'd hope to get if you were teaching this stuff as a class:
Which raises the interesting (for me...) Q: when I send you money, how far does it "move"?

This is an interesting question, and in the animation above there is an explicit answer: it moves one box in one time step. In real life, if you gave several people a large amount of money, you could measure how fast it moves from one node in the network to another. One bought dinner here; another bought a record there. The restaurant and music store paid their employees the next Thursday, but the wait staff took home their tips that night and one picked up some groceries. It possibly changed form during this journey (cash from an ATM, or a deposit at one). This data would be an input to a bottom-up measurement of money velocity.

These measurements determine the flow rate of money in an economy analogous to the flow rate of the wave in the animation above. There's another picture from that same post -- it shows the density at a single site:

Note that there's a "decay constant" (of about 500 time steps) -- this decay constant is directly related to the speed of the flow in the first animation. The faster that wave moves in the first animation, the faster the density would decay in the second animation.

I hesitate to bring it up again, but this is exactly what I was talking about when I criticized stock-flow consistent models (Steve also brought up "accounting" approaches).

Even though the accounting is exact in the model above, I could make the wave travel faster or slower (and therefore the decay happen faster or slower) by changing the size of the debits and credits or changing the number of transactions per time step. The velocity of the wave is a free parameter not established by pure accounting. In the linked post, I called that free parameter Γ and was promptly attacked by the stock-flow consistent community for heresy.
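The point can be made with a toy version of that model (a Python sketch, not the original Mathematica code): accounting is exact at every step, yet the propagation speed depends on a parameter the accounting itself doesn't determine:

```python
import random

# Tokens hop between neighboring accounts. Accounting is exact at
# every step (one account debited, its neighbor credited, total
# conserved), but how fast the initial lump spreads depends on
# transfers_per_step -- a free parameter that pure accounting
# does not pin down.
random.seed(42)

def simulate(transfers_per_step, steps=50, n_accounts=20):
    accounts = [0] * n_accounts
    accounts[0] = 1000                     # initial lump of "money"
    for _ in range(steps):
        for _ in range(transfers_per_step):
            i = random.randrange(n_accounts - 1)
            if accounts[i] > 0:            # debit one, credit neighbor
                accounts[i] -= 1
                accounts[i + 1] += 1
    return accounts

slow = simulate(transfers_per_step=5)
fast = simulate(transfers_per_step=50)
# Conservation holds in both runs; only the spread -- the decay of
# the initial site -- differs. That's the Gamma-like free parameter.
print(sum(slow), sum(fast), slow[0] > fast[0])
```

Both runs satisfy strict accounting identically; nothing in the debits and credits fixes how many transactions happen per time step, which is exactly the velocity freedom at issue.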

So where does this parameter come from? Where does velocity come from? Where does my freedom come from when Nick Rowe says:
But velocity is not just an accounting relationship between flows and stocks. I can choose the velocity of circulation of the money I hold.
Let's consider a set of accounts on a network: I buy something from you, you buy something from someone else, etc. all in a chain (time steps t are vertical, and accounts a are horizontal):

Now, what happens if I do this:

I made the accounts farther apart and the time steps closer together. In terms of accounting? I did nothing. It's just a graphic design choice. That's because these accounting boxes have nothing to do with each other in time and space. However, I've implicitly related them to each other because the boxes "next to" each other are in a sense closer to each other. There is some kind of "connection" between boxes (two are shown in blue):

How do I measure distance diagonally? How is one time step related to one account step? Well, much like how I could make any graphic design choice, I can choose any relationship between a time step Δt and an accounting step Δa (I could even make Δa = Δa(t) change as a function of time -- an expanding accounting universe). There's a free parameter that comes out of this two-dimensional space time (for flat space). In physics, it's called the speed of light (c). The space steps Δx are related to time steps Δt such that distance Δs is measured with some function

Δs² = f(Δx, c Δt) = c² Δt² - Δx²

... at least for those of us on the West Coast.

This is what was so frustrating about the stock-flow argument. The metric was assumed to be analogous to

Δs² = f(Δx, c Δt) = Δt² + Δx²

as if it was a fundamental accounting error to assume otherwise. Frequently physicists do assume h = c = 1. But then we measure everything in terms of energy or distance. One Fermi is not just a distance but also a time and both are the same as 1/197 MeV (inverse energy) which is also about 1/400 electron masses. That's only because the theory of relativity exists relating energy, matter, space, and time. You could do this in stock flow models -- assuming a new fundamental constant Γ = 1 dollar/quarter -- but then you'd have to measure time in terms of money. A year is not 4 quarters, but rather 4/dollars.

Stock flow consistent analysis is not the special relativity of economics and there is no such fundamental constant as Γ. The accounting "metric" changes over time (those time steps can seem pretty fast when there's a recession happening).

Thursday, June 16, 2016

New CPI data, and an IE model gets rejected

There is new CPI data out today, so I've updated the lag model. It looks like it's not doing too well. There's the spike in January/February, which might be transient. It's within 3 sigma (0.03%, or one month in 30 years), but it occurs in both PCE and CPI data, which makes it more like 4.5 sigma (one month in 12,000 years -- since the last ice age). That's troubling, but another problem is that the IE lag model has consistently underestimated the last 8 measurements of CPI inflation (there's a bias), which is a 0.4% probability event (about 3 sigma) on its own.
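The 0.4% figure is just the chance of eight independent, symmetric residuals all landing on the same side of the prediction; a quick check (the conversion to a sigma level assumes a one-sided normal tail):

```python
from math import sqrt, erf

# Probability that 8 independent, symmetric residuals all land on the
# same (given) side of the prediction: (1/2)**8, i.e. the ~0.4% above.
p_bias = 0.5 ** 8
print(round(100 * p_bias, 2))  # 0.39 (percent)

def normal_tail(z):
    """One-sided tail P(X > z) for a standard normal."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# Invert the tail numerically (bisection) to express p_bias as a
# sigma level.
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if normal_tail(mid) > p_bias:
        lo = mid
    else:
        hi = mid
z = 0.5 * (lo + hi)
print(round(z, 2))  # roughly 2.7 (one-sided)
```

Whether you call that "about 3 sigma" depends on one-sided versus two-sided conventions, but either way it's an unlikely run if the model were unbiased.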

I could try to console myself with the fact that the measurement of CPI inflation changed slightly in January 2016 (they started using an arithmetic mean for prescription drug prices, but that's a small component of CPI). However that doesn't fix the consistent bias. I'll still follow this model however -- the data does show periods of persistent over- and under- shooting.

Note that this is just the lagged-GDP model of CPI inflation (that I'd hoped might squeeze a few basis points more accuracy out of the data), not the PCE inflation model that is in a head-to-head competition with NY Fed DSGE model (see here for the list of predictions).

Wednesday, June 15, 2016

Macroeconomists are weird about theory

Employment rate e = 1, e ≈ 1 - u* (natural rate), and data. The "pool player" isn't that far off.

Noah Smith has a problem with Milton Friedman's "pool player analogy", and his post is generally very good so check it out. However I think it is also a good illustration of how weird economists are with theoretical models -- at least to this physicist.

As an aside, biologist David Sloan Wilson cites Noah approvingly, but possibly does not remember that he (Wilson) said that Friedman's "as if" arguments are evolutionary arguments (that I discussed before, which would also be a good post to look at before or after you read this one).

Anyway, let me paraphrase the pool player analogy:
Pool players operate as if they are optimizing physics equations

Economic agents operate as if they are rational utility optimizers
Noah's objection is that either a) you optimize exactly, and therefore you don't need to know the physics equations, just the locations of the pockets; or b) you don't optimize, and therefore there are a lot of things that go into why you'd miss, from psychology to nonlinear dynamics. As he says: "Using physics equations to explain pool is either too much work, or not enough."

This is a false dichotomy, and it results from the general lack of scope conditions in economics. Noah says this is an important topic to think about:
I have not seen economists spend much time thinking about domains of applicability (what physicists usually call 'scope conditions'). But it's an important topic to think about.
I'll say!

[Actually from a Google search, "scope condition" seems like a sociology term. I never used it as a physicist -- rather it was a domain of validity or scale of the theory.]

Mathematically, Noah is saying (if p is the probability of getting a ball in the pocket) either

p = 1

or

p = f(x, y, z, ...)

where f is some function of physics, psychology, biology, etc. This is not what Milton Friedman is saying at all, and isn't how you'd approach this mathematically unless you look at math the weird way many economists do -- as immutable logic rather than as a tool.

Milton Friedman is saying (just keeping one variable for simplicity):

p ≈ 1 + o(x²)

And with pool players in a tournament, that is a better starting point than p ≈ 0. I put o(x²) (here is Wikipedia on order of approximation) because Friedman is saying it is an optimum, so it likely doesn't have any first order corrections. This latter piece may or may not be true, so a more agnostic view would be:

p ≈ 1 + o(x)

So what about the scope conditions? Let's take Noah's example of random inhomogeneities on the balls, the table and in the air. [Update: I want to emphasize that it is the "as if" theory that gives you your hints about how to proceed here. The pool player is operating "as if" he or she is optimizing physics equations, therefore the scales and scope will come from a physics model. This of course can be wrong -- which you should learn when the theory doesn't work. For example, DSGE models can be considered an "as if" theory organizing the effects, but have turned out wrong in the aftermath of the Great Recession.] These have to be measured relative to some size scale S₀ so that we can say:

p ≈ 1 + o(s/S₀)

Now is S₀ big? Maybe. Maybe S₀ represents the table size; in that case the linear term is important for long shots. Maybe S₀ represents the ball size; in that case the linear term is important for shots longer than the ball's diameter. Maybe S₀ is the size of a grain of sand; in that case you might have a highly nonlinear system.
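Putting rough illustrative numbers on those scales (a ~1 mm inhomogeneity s; the sizes here are approximate guesses, just for orientation):

```python
# Size of the first-order correction s/S0 for a ~1 mm inhomogeneity
# against three candidate scales. All numbers are rough illustrations.
s = 1e-3                     # inhomogeneity scale: ~1 mm

scales = {
    "table length": 2.7,     # meters (roughly)
    "ball diameter": 0.057,  # meters
    "sand grain": 1e-3,      # meters
}

corrections = {name: s / S0 for name, S0 in scales.items()}
for name, x in corrections.items():
    print(f"{name}: s/S0 = {x:.3g}")
# Small against the table, noticeable against the ball, order one
# against a grain of sand -- where perturbation theory breaks down.
```

The same 1 mm bump is a 0.04% effect on one scale and a 100% effect on another, which is the whole content of "is S₀ big?".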

That's what theory is all about! It's all about finding the relevant scales and working with them in formal way. Other uses of mathematical theory are weird.

The second half of Noah's post then asks the question: what if

p ≈ 1

is the wrong starting point -- a "broken piece" of the model? Can a collection of broken pieces result in good model? For example, you could imagine the "bad player approximation"

p ≈ 0

In physics, we'd call p ≈ 1 or p ≈ 0 different "ansätze" (in reality, physicists would take p ≈ c and fit c to data). In economics, these are different "equilibria" (see David Andolfatto on DSGE models; this correspondence is something I noted on Twitter).

The crux of the issue is: do the broken pieces matter? If I think

p ≈ 1 + o(s/S₀)

but you think my model is broken (or contradicts the data), the model could still be fine if s << S₀. Lots of different models ...

p = f(a) ≈ 1 + o(a/A₀)
p = f(b) ≈ 1 + o(b/B₀)
p = f(c) ≈ 1 + o(c/C₀)

... could all lead to that same leading order p ≈ 1. For example, that leading order piece could be universal for a wide variety of dynamical systems (this is quite literally what happens near some phase transitions in statistical mechanics).

What is interesting in this case is that the details of the system don't matter at all. If this were true in economics (we don't know), it might not matter what your Euler equation is in your DSGE model. Who cares if it violates the data -- it might not matter to the conclusions. Of course, in that example, it does matter: the Euler equation drives most of the macro results of DSGE models, but is also rejected by data.

There are many other ways those extra terms might not be important -- another example happens in thermodynamics. It doesn't matter what an atom is -- whether it is a hard sphere, a quantum cloud of electrons surrounding a nucleus, or blueberries -- to figure out the ideal gas law. The relevant scales are the mean distance between "things" λ ~ (V/N)^(1/3) relative to the size of the "thing" you are aggregating (for atoms, this is the thermal wavelength; for blueberries it is about 1 cm), and the total number of "things" (N >> 1). (Although blueberries have inelastic collisions with the container, so they'd lose energy over time.)
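For the atomic case the comparison is concrete; a sketch for argon at room temperature and atmospheric pressure (standard physical constants, order-of-magnitude only):

```python
from math import pi, sqrt

# Compare the thermal de Broglie wavelength of an argon atom at room
# temperature with the mean interparticle spacing (V/N)**(1/3) at
# atmospheric pressure. Classical ideal gas behavior needs
# lam_thermal << spacing.
h = 6.626e-34          # Planck constant, J s
kB = 1.381e-23         # Boltzmann constant, J/K
m = 39.95 * 1.661e-27  # argon atomic mass, kg
T = 300.0              # K
P = 101325.0           # Pa

lam_thermal = h / sqrt(2 * pi * m * kB * T)
spacing = (kB * T / P) ** (1.0 / 3.0)  # (V/N)**(1/3) from PV = N kB T

print(lam_thermal, spacing, spacing / lam_thermal)
# The spacing exceeds the thermal wavelength by a couple hundred
# times: the gas is deep in the classical regime.
```

The details of the argon atom drop out precisely because this ratio is so large; the analogous question for macroeconomics would be which agent details drop out of which aggregates.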

The other great part about using scope conditions is that you can tell when you are wrong! If the details of the model do matter, then starting from the wrong "equilibrium" will result in "corrections" that have unnatural coefficients (like really big numbers c >> 1 or really small numbers c << 1) or get worse and worse at higher and higher order -- the (s/S₀)² will be more important than the s/S₀ term. This is why comparison to data is important.

But the really big idea here is that you have to start somewhere. I think David Andolfatto puts it well in his DSGE post:
We are all scientists trying to understand the world around us. We use our eyes, ears and other senses to collect data, both qualitative and quantitative. We need some way to interpret/explain this data and, for this purpose, we construct theories ... . Mostly, these theories exist in our brains as informal "half-baked" constructs. ... Often it seems we are not even aware of the implicit assumptions that are necessary to render our views valid. Ideally, we may possess a degree of higher-order awareness--e.g., as when we're aware that we may not be aware of all the assumptions we are making. It's a tricky business. Things are not always a simple as they seem. And to help organize our thinking, it is often useful to construct mathematical representations of our theories--not as a substitute, but as a complement to the other tools in our tool kit (like basic intuition). This is a useful exercise if for no other reason than it forces us to make our assumptions explicit, at least, for a particular thought experiment. We want to make the theory transparent (at least, for those who speak the trade language) and therefore easy to criticize.
Andolfatto is making the case that DSGE is a good starting point.

But this is another place where economists seem weird about theoretical models to this physicist. Usually, that good starting point has something to do with empirical reality. The pool player analogy would start with p ≈ 1 if good players are observed to make most of their shots.

Economists seem to start with not just p ≈ 1 but assert p = 1 independent of the data. There is no scope; they just armchair-theorize that good pool players make all of the shots they take -- turning model-making into a kind of recreational logic.

[Aside: this is in fact false. Real pool players might actually consider a shot they estimate they might have a 50-50 chance of making if they think it might increase their chance of a win in the long run. But still, p ≈ 1 is probably a better leading order approximation than p ≈ 0.]

Sometimes it is said that asserting p = 1 is a good way to be logically consistent, clarify your thinking, or make it easier to argue with you. Andolfatto says so in his post. But then p = 1 can't possibly matter from a theoretical point of view. It's either a) wrong, so you're engaged in a flight of mathematical fancy; b) approximate, so you're already admitting you're wrong at some level and we're just arguing about how wrong (an empirical question); or c) irrelevant (like in the case of universality above).

In physics we know Newton's laws are incomplete (lack scope) for handling strong gravitational fields, very small things, or speeds near the speed of light. We'd never say that's fine -- let's try and understand micro black holes with F = ma! The problem is compounded because in economics (in macro at least) there aren't many starting points besides supply and demand, expectations, the quantity theory of money, or utility maximization. In physics, there were attempts to use classical non-relativistic physics to understand quantum phenomena and relativity when it was the only thing around. If all I have is a hammer, I'm going to try to use the hammer. We discovered that regular old physics doesn't work when action is ~ h (Planck's constant) or velocity ~ c (speed of light) -- that is to say we discovered the scope conditions of the theory!

What should be done in economics is use e.g. DSGE models to figure out the scope conditions of macroeconomics. DSGE models seemed to work fine (i.e. weren't completely rejected by data) during the Great Moderation. The Great Recession proved many models' undoing. Maybe DSGE models only apply near a macroeconomic steady state?

In general, maybe rational agents only apply near a macroeconomic steady state (i.e. not near a major recession)?

This is what David Glasner calls macrofoundations of micro (the micro economic agents require a macroeconomic equilibrium to be a good description). Instead of repeating myself, just have a look at this post if you're interested.

Overall, we should head into theory with three principles:
  1. Models are approximate (every theory is an effective theory)
  2. Understand the scope (the scale of the approximation)
  3. Compare to the data
Do those three things and you'll be fine!

The weird way economists do theory inverts all of these:
  1. Models are logic
  2. Ignore the scope
  3. Data rejects too many good models
If you do the latter, you end up with a methodological crisis (you can't fix bad models) and models that have nothing to do with the data -- exactly what seems to have happened in macroeconomics.


PS Pfleiderer's chameleon models are ones where p = 1 is assumed in order to draw policy conclusions about the real world. When those policy conclusions are accepted, the p = 1 is ignored. If someone questions the model, the p = 1 is described as obviously unrealistic.

PPS Everything I've said above kind of ignores topological effects and other non-perturbative things that can happen in theories in physics. There are probably no topological effects in economics and the picture at the top of this post (from here) doesn't look non-perturbative to me.

Monday, June 13, 2016

The urban environment as information equilibrium

Cameron Murray wrote a post a couple weeks ago that made me think about applying information equilibrium to urban economics. Cameron tells us "[t]he workhorse model of urban economics is the Alonso-Muth-Mills (AMM) model of the mono-centric city" and then goes on to look at some of its faults. Here's his picture:

Let's tackle this with the information equilibrium framework.

Let's set up the information equilibrium system (market) $h : R \rightleftarrows S$ where $h$ is building height (proxy for density), $R$ is distance (range) from the center and $S$ is the supply of housing (housing units). And let's assume $R$ varies slowly compared to $S$ -- i.e. transportation improvements and new modes (that fundamentally change the city's relationship with distance) happen slowly compared to adding housing supply. This puts us in the partial equilibrium regime with $R$ as a "constant information source" (see the paper). Height $h$ is the detector of information flowing from the distance from the center to the housing supply; in economics, we think of this as a "price".

The "market" above is shorthand for the information equilibrium condition (with information transfer index $k$)

$$
h \equiv \frac{dR}{dS} = k \; \frac{R}{S}
$$

which we can solve for slow changes in $R$ relative to $S$ (and then substitute into the "price" $h$) to obtain (where $R_{0}$ and $S_{ref}$ are model parameters):

$$
h = \frac{k R_{0}}{S_{ref}} \exp \left( -\frac{\Delta R}{k R_{0}}\right)
$$
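For readers who want the intermediate step spelled out (same symbols as above, treating $R \approx R_{0}$ as approximately constant in the partial equilibrium regime):

$$
h = \frac{dR}{dS} \approx k \; \frac{R_{0}}{S} \;\; \Rightarrow \;\; \Delta R = k R_{0} \log \frac{S}{S_{ref}} \;\; \Rightarrow \;\; S = S_{ref} \; e^{\Delta R / k R_{0}}
$$

Substituting $S$ back into $h \approx k R_{0}/S$ gives the exponential fall-off of height with distance $\Delta R$.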

Here's a plot of this function:

You could of course substitute housing price $p$ or density $\rho$ for $h$ (or, more rigorously, set up information equilibrium relationships $p \rightleftarrows h$ or $\rho \rightleftarrows h$ so that e.g. $p \sim h^{\alpha}$, and therefore $p \sim \exp \left( -\alpha \Delta R / k R_{0} \right)$).
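The solution and the power-law substitution are easy to sketch numerically. Here's a minimal version (the parameter values $k$, $R_{0}$, $S_{ref}$, and $\alpha$ are purely illustrative, not fit to any data):

```python
import numpy as np

def ideal_height(delta_r, k=1.0, r0=10.0, s_ref=100.0):
    """Ideal building height h = (k R0 / S_ref) exp(-dR / (k R0)),
    the partial equilibrium solution above."""
    return (k * r0 / s_ref) * np.exp(-delta_r / (k * r0))

# height falls off exponentially with distance from the center
dr = np.linspace(0.0, 50.0, 101)
h = ideal_height(dr)

# price in information equilibrium with height: p ~ h**alpha,
# which is again exponential in distance with rate alpha/(k R0)
alpha = 1.5
p = h ** alpha
```

On a logarithmic height axis both curves are straight lines, which is the easiest visual check against data.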

Now markets are not necessarily ideal, and therefore information equilibrium does not hold exactly. In fact, it fails in a specific way: the observed height $h^{*} \leq h$, because the housing supply $S$ can at best receive all of the information from $R$, i.e. $I(R) \geq I(S)$ -- a condition called non-ideal information transfer. So what we'd see in practice is something like this:
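The qualitative picture -- observed heights scattered below the ideal curve, which acts as an upper bound -- can be mocked up with random data (this is only a cartoon of non-ideal transfer; the random factor is an assumption for illustration, not derived from the framework):

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed for reproducibility

def ideal_height(delta_r, k=1.0, r0=10.0, s_ref=100.0):
    """Ideal (information equilibrium) height bound at distance delta_r."""
    return (k * r0 / s_ref) * np.exp(-delta_r / (k * r0))

# Non-ideal information transfer: I(R) >= I(S), so observed heights h*
# fall at or below the ideal curve. Mock this up as the ideal bound
# multiplied by a random factor in [0, 1).
dr = rng.uniform(0.0, 50.0, size=500)                  # distances from center
h_ideal = ideal_height(dr)                             # the bounding curve
h_obs = h_ideal * rng.uniform(0.0, 1.0, size=dr.size)  # observed h* <= h
```

Plotting `h_obs` against `dr` gives a scatter bounded above by the exponential, which is the shape of the figures below.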

Here's a logarithmic scale graph:

This is not too different from what is observed (from here [pdf]), assuming price is in information equilibrium with building height ($p \rightleftarrows h$):

In short, information equilibrium provides a pretty good first-order take as an urban economic model. Height restrictions, other zoning ordinances, and preserved green space all push the observed height below the ideal curve -- i.e. non-ideal information transfer.