Wednesday, February 10, 2016

One more physics analogy

David Glasner found a back-and-forth between me and a commenter on Nick Rowe's blog (writing under the pseudonym "Avon Barksdale", after a character on The Wire who didn't end up taking an economics class) who expressed the (widely held) view that the only scientific way to proceed in economics is with rigorous microfoundations. "Avon" held physics up as a purported shining example of this approach.

I couldn't let it go: even physics isn't that reductionist. I gave several examples of cases where the microfoundations were actually known, but not used to figure things out: thermodynamics, nuclear physics. Even modern physics is supposedly built on string theory. However, physicists do not require that every pion scattering amplitude be calculated from QCD. Some people do carry out so-called lattice calculations, but many resort to the "effective" chiral perturbation theory. In a sense, that was what my thesis was about -- an effective theory that bridges the gap between lattice QCD and chiral perturbation theory. That effective theory even gave up on one of the basic principles of QCD -- confinement. It would be like an economist giving up opportunity cost (a basic principle of the micro theory). But no physicist ever said to me "your model is flawed because it doesn't have true microfoundations". That's because the kind of hard core reductionism that surrounds the microfoundations paradigm doesn't exist in physics -- the most hard core reductionist natural science!

In his post, Glasner repeated something that he had said before and -- probably because it was in the context of a bunch of quotes about physics -- I thought of another analogy.

Glasner says:
But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.

This hits on a basic principle of physics: any theory radically simplifies near an equilibrium. One way this manifests is through new effective degrees of freedom. I'll take an example from some (I guess not-so) recent news: the Higgs boson. The Higgs mechanism is based on "spontaneous symmetry breaking", where the vacuum state of the field, instead of sitting at zero, takes on some nonzero value. What happens is that the universe falls from an unstable equilibrium to a new stable one -- typically illustrated by the potential energy surface shown in this diagram:

The unstable vacuum state of the universe is brown and the stable vacuum state is dark blue (at least one of them). This blue state also "breaks" the rotational symmetry of the diagram (and it falls there "spontaneously"). Additionally, the perturbative theory around the blue vacuum is much simpler -- near the equilibrium it consists of non-interacting massive particles (the degrees of freedom in the upward curved direction) and massless "Goldstone bosons" in the flat circular direction (blue circle). These are new simplifying -- effective -- degrees of freedom. The theory at the brown point is a much more complex interacting theory.

How does this relate to what Glasner says? Well consider a macroeconomic state space with multiple stable equilibria, like this:

Generally, the fundamental theory is complex. However, in the neighborhood of a stable equilibrium (as Glasner says), the theory simplifies with new effective degrees of freedom ... for example: optimizing agents with rational expectations. Glasner's "macrofoundations" of these effective rational agents are analogous to the equilibrium vacuum state of the universe giving us the simplified effective theory.

One way to interpret this is that rational agents are a fiction -- the true microfoundations are the microscopic theory underlying the locations of the equilibria. In the analogy, the true microfoundations would be the Higgs field, not the simplifying Goldstone boson representation in the observed vacuum state. The latter are a simplifying fiction in the neighborhood of the equilibrium.

A second way to interpret this is that it is possible we have an effective theory of rational agents when we are near equilibrium. It is possible we have effective rational agents like in this emergent picture, and even an effective intertemporal budget constraint.

The first case would be the ultimate paradox for the hard core reductionist view of economics. The rational optimizing agents they think are true microfoundations are just effective degrees of freedom that should be derived from a more complex, more fundamental theory.

But in physics, we take the second view -- because physicists aren't that reductionist. A theory that works is the best theory. And that's not necessarily the more fundamental one.

Tuesday, February 9, 2016

Production possibilities and Brownian motion

At the end of the previous post, I discussed how a bowed-out [1] production possibilities frontier (PPF) could arise from essentially Brownian motion -- and I mentioned that I'd do a simulation to demonstrate that idea. I took random paths (of length 1000) that correspond to king moves on an infinite chessboard, starting in the bottom left corner. These paths look like this (we'll get to the blue line, but the blue dot represents the average end point ... and for one path, that is just the end point):

What is the average end point of 1000 paths?

The average end point seems to coincide with that blue line -- that's because it's the diffusion length D for 1000 time steps of a random walk with a step size of (roughly) unit length. You don't need to worry about the fact that one of the step sizes (the diagonal king move) is ~ 1.4. Anyway, the level curves of the (smoothed) density histogram for the end points are bowed out (the diffusion length is roughly the level curve corresponding to the mean value of the end point):

So we'd generally expect a two-good economy to be approximately the diffusion length D away from the origin [2]. Since this is a radius, this forms a set of points (PPF) that is bowed-out relative to the line between (D, 0) and (0, D).
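That simulation can be sketched in a few lines. This is a hedged reconstruction (path length, number of paths, and the use of free king-move walks are my choices; a walk reflected into the first quadrant has essentially the same distance statistics):

```python
import math
import random

def king_walk(steps, rng):
    """Random walk on an infinite chessboard using king moves,
    starting at the origin."""
    moves = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if (dx, dy) != (0, 0)]
    x = y = 0
    for _ in range(steps):
        dx, dy = rng.choice(moves)
        x += dx
        y += dy
    return x, y

rng = random.Random(42)
steps, paths = 1000, 500
ends = [king_walk(steps, rng) for _ in range(paths)]

# Average distance of the end points from the origin ...
mean_r = sum(math.hypot(x, y) for x, y in ends) / paths

# ... compared to the diffusion length sqrt(N * <step^2>).
# For king moves, <step^2> = (4*1 + 4*2)/8 = 1.5.
diffusion_length = math.sqrt(steps * 1.5)
print(mean_r, diffusion_length)
```

The mean end-point distance comes out close to the diffusion length (a bit below it, since the mean of a 2D distance distribution sits slightly inside the root-mean-square radius), which is the "blue line" behavior described above.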

Note that if there was some kind of constraint -- e.g. budget constraint B -- that was close to D, it could have some impact on the shape of the level curves. If B >> D, then there is little impact and D determines the PPF; if B << D, then B determines the PPF. Additionally, there could be reasons that the diffusion might favor movement parallel to the axes, resulting in a "bowed-in" PPF (e.g. economies of scale).

But without other considerations or constraints, the default should be bowed-out PPFs and upward sloping supply curves.



[1] By "bowed-out", we mean what Nick Rowe means. Bowed-out PPFs mean upward sloping supply curves.

[2] If diffusion was asymmetric, you'd end up with an asymmetric diffusion length and quarter-ellipse PPFs.

Monday, February 8, 2016

Production possibilities and the slope of the supply curve

There was a discussion on the blogs about teaching the Production Possibilities Frontier [PPF] (or curve) for two goods (say, Apples and Bananas) in introductory economics classes. Brad DeLong started it; Paul Krugman joined in. Then there was more from DeLong.

Nick Rowe went over it awhile ago in a post, and commented on DeLong's first post above, saying:
How can you (easily) explain (e.g.) why supply curves slope up without using a (curved) PPF?

Which is similar to what he said in his post:
Which means the PPF is now curved, and bowed out. ... Which means that the supply curve of apples will slope up.

Now I reconstructed the PPF using an information equilibrium model in this post based on Rowe's post. It turns out the PPF is a level curve of the production possibilities surface constructed from the quantity-weighted sum of the supply curves (surfaces) for the two goods. Here are the supply and demand diagrams (assuming the markets are independent and e.g. the supply curve line becomes a plane):

Here are the supply surfaces together:

And here is their quantity weighted sum -- the production possibilities surface [PPS] -- (with level curves, aka various PPF's):

If you take flat supply surfaces:

and take the quantity weighted sum, you get straight lines for your PPF's:

which is exactly how it works at Nick Rowe's post. But I did want to add a bit here. Nick Rowe's comment at DeLong's blog seems to suggest that a curved PPF "explains" the upward sloping supply curve. However, since those PPF's are level curves of the quantity-weighted sum of the two supply surfaces, the idea that "the PPF bows out" (the level curves of the PPS are bowed out) and the idea that "the supply curve for a single good slopes up" (i.e. the PPS has curvature) are not logically independent of each other. That is to say, they mean the same thing -- there is no knowledge added to a bowed-out PPF that makes it lead to an upward sloping supply curve.

The curvature of the PPS is determined by weighting a locally linear supply curve 

P = a S + b

with a > 0 (i.e. upward sloping) by the quantity supplied, so we get

P × S = a S² + b S

Or in both directions:

P₁ × S₁ + P₂ × S₂ = a₁ S₁² + b₁ S₁ + a₂ S₂² + b₂ S₂

with a₁, a₂ > 0, which is locally a paraboloid. Therefore the statements

P = a S + b

P₁ × S₁ + P₂ × S₂ = a₁ S₁² + b₁ S₁ + a₂ S₂² + b₂ S₂

are not logically independent. A "bowed out PPF" defines supply curves as upward sloping, and upward sloping supply curves define the PPF as bowed out. A bowed PPF curve doesn't "explain" the supply curve any more than non-Euclidean geometry "explains" why the parallel postulate fails.
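This equivalence can be checked numerically with hypothetical coefficients (the values of a₁, b₁, a₂, b₂ below are arbitrary illustrations, not estimates): pick a level curve of the PPS and verify that it lies outside the straight line between its axis intercepts, i.e. that upward sloping supply curves produce a bowed-out PPF.

```python
import math

# Hypothetical supply-curve parameters; a > 0 means upward sloping.
a1, b1 = 0.5, 1.0
a2, b2 = 0.8, 0.5

def pps(s1, s2):
    """Quantity-weighted sum P1*S1 + P2*S2 with P_i = a_i*S_i + b_i."""
    return a1 * s1**2 + b1 * s1 + a2 * s2**2 + b2 * s2

level = pps(10.0, 0.0)   # the level curve (PPF) through (10, 0)

def s2_on_curve(s1):
    """Solve a2*s2^2 + b2*s2 = level - (a1*s1^2 + b1*s1) for s2 >= 0."""
    rhs = level - (a1 * s1**2 + b1 * s1)
    return (-b2 + math.sqrt(b2**2 + 4 * a2 * rhs)) / (2 * a2)

# Axis intercepts of this PPF
s1_max, s2_max = 10.0, s2_on_curve(0.0)

# A bowed-out PPF lies above the chord between its intercepts.
s1_mid = s1_max / 2
line_mid = s2_max * (1 - s1_mid / s1_max)   # chord at the midpoint
curve_mid = s2_on_curve(s1_mid)
print(curve_mid > line_mid)   # True: upward slope implies bowed out
```

Setting a₁ = a₂ = 0 (flat supply surfaces) makes the level curves straight lines, matching the straight-line PPF case above.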

Now I think Rowe could mean that the bowed PPF curve is more intuitive than an upward-sloping supply curve -- and with that I would completely agree. Trying to explain why a supply curve slopes up is hard. Suggesting that the PPF might bow out? Easy. Think of exploring a state space with a random walk, maybe with occasional jumps (instead of occupied, think visited) [1]:

Would exploring this space look more like a triangle, a bowed-out curve or a bowed-in curve? You'd probably say "it depends" and you'd be right. But the choice that requires the least amount of additional assumptions is bowed-out (think of it as a quarter of a random walk starting at zero in a 2-dimensional space). The triangle essentially requires a budget constraint, and the bowed-in curve requires a reason to prefer the axes. If we think of this as two dimensions out of n >> 1 dimensions, then the most likely place to find the "explorer" is near the PPF at the surface (since most points of a higher dimensional volume are near its surface) -- with a radius essentially given by the diffusion length D.

I'll see if that works out with some numerical simulations in a future post. But that would explain why the supply curve slopes up: diffusion from zero leads to a circular region bounded by the PPF with radius ~ √(D t), which is equivalent to an upward sloping supply curve.



[1] My extra explanations (visited vs. occupied, jumps) are here just so I could re-purpose this figure, which I drew for what was going to be a purely MaxEnt description of the PPF -- which is a bit harder than I originally thought. You shouldn't think of the states inside the PPF as "occupied" (as in the diagram), but rather "explored". It is sketched out above, though.

Economics is not logic and therefore not true or false

Mark Thoma linked to an article by Edward Prescott (of RBC aka Kydland-Prescott and HP filter fame) which I haven't read. I made it through the abstract to this sentence:
Reality is complex, and any model economy used is necessarily an abstraction and therefore false.

This is a dumb sentence. It either has zero content or is wrong. Prescott is probably using it to defend his RBC theory against the fact that it doesn't really match up with empirical data, but even then it's still a dumb sentence. 

First: abstractions are not falsehoods (i.e. antonym of tautology). They are abstractions. All of mathematics is an abstraction, but 2 + 2 = 4 is not false. An abstraction may be an approximation of reality with varying degrees of usefulness. But usefulness is a continuum, not a Boolean.

Second: approximate theories are not false. This is the same sense of "false" in which (apparently) Popper viewed Newtonian physics compared to Einstein's. People who know the theories understand that one is an approximation to the other with a certain scope. How would calling an approximation "false" be helpful? What do we learn from the sentence (translating):
Reality is complex, and any approximation used is necessarily approximate and therefore approximate.
I hope you wrote that down in your copybooks. It's going to come in handy never. You can capture the same idea by saying "we haven't really figured economics out yet". Really? You don't say ...

This dumb sentence comes from a dumb view of economics; Brad DeLong gives us a quote from Keynes that encapsulates this dumb view:

It seems to me that economics is a branch of logic, a way of thinking ...

No it's not. It's an attempt to see if a real world system involving numerical quantities has regularities expressible in terms of mathematics [1]. You can use logic in economics, but economics is not a branch of logic. If it was a branch of logic, you could do economics without ever collecting data. This is something that is false in a useful sense of the word.



[1] I do always find it funny that people think economics has too much math. Are they OK with physics having that much math? Well, physics doesn't even have any explicit numbers -- all of the numbers in physics are made-up (human-defined) quantities. Gravitational acceleration on Earth is 9.8 m/s/s, but we made up meters and seconds. Economics can actually have explicit numbers: e.g. two widgets for six tokens of money. You can count them.

Saturday, February 6, 2016

Computing Nash equilibria is intractable

We show that computing a Nash equilibrium is an intractable problem. Since by Nash’s theorem a Nash equilibrium always exists, the problem belongs to the family of total search problems in NP, and previous work establishes that it is unlikely that such problems are NP-complete. We show instead that the problem is as hard as solving any Brouwer fixed point computation problem, in a precise complexity theoretic sense. The corresponding complexity class is called PPAD, for Polynomial Parity Argument in Directed graphs, and our precise result is that computing a Nash equilibrium is a PPAD-complete problem.

In view of this hardness result, we are motivated to study the complexity of computing approximate Nash equilibria, with arbitrarily close approximation. In this regard, we consider a very natural and important class of games, called anonymous games. These are games in which every player is oblivious to the identities of the other players; examples arise in auction settings, congestion games, and social interactions. We give a polynomial time approximation scheme for anonymous games with a bounded number of strategies.
That is from the abstract of Constantinos Daskalakis's thesis [pdf], underlining emphasis mine. I had a tweet about this earlier this week, but I wanted to say a bit more about it. Since we can view the market as an algorithm to solve a computational problem (I like this blog post and there is also this paper), the market probably does not find a Nash equilibrium. What is also interesting is that the Nash equilibrium problem is as hard as the Arrow-Debreu equilibrium problem (an application of the Brouwer fixed point theorem) ... although that shouldn't be surprising, since the proof of existence of Nash equilibria can also proceed via Brouwer's theorem.

Daskalakis proceeds to look at algorithms to find approximate Nash equilibria, but this result makes me think that economic equilibria may not even be approximate Nash or Arrow-Debreu equilibria. The process of tâtonnement (an algorithm for finding approximate equilibria) would proceed until you reached either a satiation point (one all parties can live with) or simply the most likely point (the average of all possible system configurations) ... i.e. the maximum entropy point (see also this).
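As a concrete illustration of an iterative search for an approximate equilibrium (a stand-in for tâtonnement, not Daskalakis's algorithm), fictitious play in matching pennies -- each player best-responding to the opponent's empirical mix -- converges in frequencies to the 1/2-1/2 mixed Nash equilibrium:

```python
# Matching pennies: row player wants to match, column player to mismatch.
# Payoffs are for the row player; the column player gets the negative.
payoff = [[1, -1],
          [-1, 1]]

counts_row = [1, 1]   # empirical counts of the row player's actions
counts_col = [1, 1]   # empirical counts of the column player's actions

def best_response(opp_counts, row_player):
    """Best response to the opponent's empirical mixed strategy."""
    total = sum(opp_counts)
    values = []
    for a in (0, 1):
        if row_player:
            v = sum(payoff[a][b] * opp_counts[b] for b in (0, 1)) / total
        else:
            v = sum(-payoff[b][a] * opp_counts[b] for b in (0, 1)) / total
        values.append(v)
    return 0 if values[0] >= values[1] else 1

for _ in range(10000):
    a = best_response(counts_col, row_player=True)
    b = best_response(counts_row, row_player=False)
    counts_row[a] += 1
    counts_col[b] += 1

freq_row = counts_row[0] / sum(counts_row)
freq_col = counts_col[0] / sum(counts_col)
print(freq_row, freq_col)   # both drift toward the 1/2-1/2 equilibrium
```

Convergence here is guaranteed (zero-sum game), but it is slow -- and in general games even this kind of simple dynamic need not find an equilibrium, which is the point of the hardness result.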

Also, one of the ways to "break" intractability is with randomness -- that is the idea behind Monte Carlo algorithms -- see this old Scientific American article [pdf]. Remember when Scientific American was good?

Economists will probably just ignore this like they ignore the SMD theorem [Kirman, pdf], though.

Friday, February 5, 2016

The long and short of interest rates

I updated the long and short term interest rate graph from this post with the latest data and one- and two-sigma error bands (errors for monthly data, 1960-2016):


Update 6 Feb 2016 

Linear scale graph:

Thursday, February 4, 2016

The IS-LM model as an effective theory at low inflation

No one seems to like the poor old IS-LM model except Paul Krugman. I thought I'd defend it as an effective macro theory. What does that mean? It means it is a theory that operates within a certain scope. For IS-LM, assuming the information transfer (IT) model is the "true" theory, that scope is low inflation. Here's how it works:

Start with the AD-AS model (with aggregate demand aka nominal output N and aggregate supply S), and introduce money (M)

N ⇄ S

N ⇄ M ⇄ S

This produces an information equilibrium relationship

P : N ⇄ M

The P just names the abstract price (the price level) and the above notation means:

P ≡ dN/dM = k (N/M)

We can solve this differential equation to obtain:

N ~ Mᵏ

P ~ k Mᵏ⁻¹

If k ≈ 1 (one way of stating our scope condition), then

N ~ M

P ~ constant

so that inflation π ≈ 0 (another way of stating our scope condition [1]). If inflation is "small", then

N ~ Y

where Y is real output. Now let's introduce another set of information equilibrium relationships

p : N ⇄ MB
i ⇄ p

where MB is the monetary base, p is the abstract price of money and i is the nominal (short term) interest rate. If π ≈ 0, then we can write this as:

p : Y ⇄ MB
r ⇄ p

using N ~ Y and i ~ r, where r is the real interest rate. The differential equations associated with this pair of information equilibrium relationships have a similar solution in general equilibrium to the one above, but I'll write it differently:

(1) log(r) = c log(b Y/MB)

Now which variables do you think will react the fastest to monetary expansion? I'd go with the monetary base and the interest rate, rather than real output. That puts us on the partial equilibrium solution where Y "moves last" (in economics parlance). In physics terms, this is analogous to an isothermal expansion of an ideal gas (an 'iso-output' expansion of an economy) that looks like this:

Those angle brackets mean ensemble averages (weighted average over random markets).

The interest rate falls and the monetary base expands from MB₁ to MB₂. Note that at this level of effective theory, it is not important which one causes the other -- increasing the base lowers interest rates, or lowering interest rates expands the base. As Y rises, interest rates will come back up and equation (1) will hold -- general equilibrium restored.
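A quick numerical sketch of that "iso-output" expansion under equation (1), with hypothetical values for c, b, Y, and the base (none of these are fitted parameters): holding output fixed while the base expands, the implied rate falls.

```python
import math

# Hypothetical parameters for equation (1): log(r) = c log(b Y/MB)
c, b = 2.0, 0.5
Y = 100.0   # real output held fixed ('iso-output' expansion)

def rate(MB):
    """Short-term rate implied by equation (1) at fixed output Y."""
    return math.exp(c * math.log(b * Y / MB))

r1 = rate(10.0)   # before the base expands (MB1)
r2 = rate(12.0)   # after the base expands (MB2 > MB1)
print(r1, r2)     # the rate falls as the base expands at fixed Y
```

Letting Y rise afterward raises the rate back toward its general equilibrium value, as described above.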

What about fiscal policy? That's the second diagram above. In that case, the central bank "moves last". Fiscal expansion should take us from Y₁ to Y₂, but if the central bank didn't let rates rise or the base expand, you could end up back where you started. This is the basic monetary offset picture that happens when inflation is low and nominal (and real) interest rates are positive.

That captures most of the mechanics of the IS-LM model; you could rewrite this in terms of investment if you wanted (as I do here). And there is a bit more to the details of how it works, but the theories are roughly isomorphic to each other.

Paul Krugman points out that in the liquidity trap argument everything changes at the zero lower bound (or basically, any time interest rates are very low ... remember, we're still in a limit where inflation is very low as well). This is the limit where r ≈ 0⁺. Here we have a different-looking pair of diagrams:

The general idea is the same, but the magnitudes are very different. A large expansion of the base drops your interest rate to zero without moving output very far to the right (monetary expansion is ineffective). A large fiscal expansion can happen without large movement in the monetary base (i.e. monetary offset doesn't work). In this case, we have a model that captures the essence of the liquidity trap.

Recall that we built this out of the AD-AS model, but we did it by restricting the scope to r ≈ 0⁺, π ≈ 0 (or just π ≈ 0 for the IS-LM model) [1]. This doesn't mean this model is right, but it does mean if you are using the AD-AS model where inflation is low and interest rates are low, then you are effectively using the IS-LM model (if you are doing it correctly).

And since the above model does pretty well empirically (see the paper or the forecasts), it means that any theory that describes the data (when π ≈ 0) will also be approximated by the IS-LM model.

An analogy: since Newton's inverse square law describes the orbits of planets fairly well, any theory that describes the orbits of planets fairly well will be approximated by Newton's inverse square law. This is actually true of general relativity (which reduces to Newton's inverse square law) and will also be true of the fundamental quantum theory of gravity -- should it exist.



[1] Strictly speaking, these inflation and interest rates are small compared to some other scale in the theory; in this case we could use the average growth rate μ of the monetary base (minus reserves), so that r << μ and π << μ. In the US, μ has been about 7% per year. Inflation is about 1-1.5% and (short term, nominal) interest rates are 0.3-0.5% -- both are less than 7%, and that gives us an idea that the error in this model could be ~ 10% ... i.e. 0.005/0.07 ~ 0.01/0.07 ~ 0.1.