Saturday, February 28, 2015

Sumner to quantitative analysis: drop dead

Tony Yates calls out market monetarists to get quantitative:
I’m sure these mix-ups would get ironed out if [market monetarists] stopped blogging and chucking words about, and got down to building and simulating quantitative models.
Scott Sumner decides instead to set Western civilization back a thousand years:
In my view economists should forget about “building and simulating quantitative models” of the macroeconomy, which are then used for policy determination. Instead we need to encourage the government to create and subsidize trading in NGDP futures markets (more precisely prediction markets) and then use 12-month forward NGDP futures prices as the indicator of the stance of policy, and even better the intermediate target of policy.
Sumner apparently doesn't even want to simulate an economy with an NGDP futures market before using one to guide the economy. And you can't just set one up and declare that it won't influence policy decisions. If you set up a liquid NGDP market, but said it was only an experimental measure with regard to policy, markets could still crash on news of a crash in the NGDP market.

"We think this NGDP market represents the wisdom of crowds, but please ignore it if it does anything weird."

Note that Sumner is trying to set up an NGDP prediction market. The thing is that only insofar as people don't believe the prediction market actually works is it not dangerous to the economy. If people believed it was working (it was liquid enough, had enough volume, enough diversity of participants, or whatever), then its movements could strongly affect economic sentiment and spark a panic. This is where market monetarism's reliance on expectations comes back to haunt it.

Now let's crowd-source a launch vehicle and send people to Mars without testing it!


Note that Project Vanguard actually did test the components of that rocket before that happened. Because reason.

Thursday, February 26, 2015

Market monetarism and the Keynesian beauty contest

Attention conservation notice: an over-1500-word essay about universality classes and kittens.
In my recent post on expectations, I wrote this:
Overall, I'd sum up my problem with the centrality of expectations with a question to Scott Sumner: what happens to your theory if markets don't believe market monetarism? In a sense, this question cannot be addressed by market monetarism. The "market" piece presumes markets believe the theory (i.e. market expectations are consistent with market monetarism, i.e. assuming rational expectations in market monetarism ... I called this circular reasoning before, but charitably, this could be taken that market monetarism is the only self-consistent theory of expectations as I mention in a comment at that link).
One thing I forgot about until I was doing some searching on Sumner's blog was that Sumner had basically assumed it explicitly:

Markets are market monetarists (23 Mar 2012)
It’s not surprising that the markets are market monetarist, as my views of macro were largely developed by watching how markets responded to the massive and hence easily identifiable monetary shocks of the interwar period.  That’s why I never lose any sleep at night worrying about whether market monetarism will ever be discredited; I know the stock market agrees with me.
I'd like to expand on what I meant by market monetarism claiming to be the only self-consistent theory of expectations. I borrowed the phrase self-consistent from physics; let me elaborate on what I mean by that [1]. I'd also like to better explain why I think market monetarism is more an ideological movement than an economic theory.

Let's call market monetarism M, which is a functional of expectations E, i.e. M = M[E]. But additionally, the output of "market monetarism" as an economic theory defines the expectations one should have given a set of economic variables n, m, ... (say, NGDP, base money, ...). That is to say, M[E] gives us E as one of the outputs. This is rational expectations, aka model-consistent expectations. So what we have is this:

(1) E(n, m, ...) =  M[E = E(n, m, ...)]

There are many paths of variables (n, m, ...) that can lead to the same expectations E (it's called indeterminacy), but that's not important right now [2]. Basically, expectations held by the market represent a fixed point of M ... like a Nash equilibrium of some expectations game. This is all well and good, and is really just a straightforward application of rational expectations. You could say the same of a New Keynesian theory ... E = NK[E]. In fact, a wide class of theories can have fixed points like this (any RBC or DSGE model, for example).
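As an aside, here's a minimal numerical sketch of what such a fixed point looks like (the map and its parameters are a toy of my own, not anything from market monetarism): iterate E = M[E] from a guess until it stops changing.

```python
# Toy fixed-point illustration: rational expectations satisfy E = M[E].
# M here is an arbitrary contraction standing in for "the theory";
# the 0.5 and 1.0 are made-up parameters.

def M(E):
    return 0.5 * E + 1.0

E = 0.0
for _ in range(50):
    E = M(E)

print(E)  # converges to the fixed point E* = 2.0, solving E = 0.5 E + 1
```

Any contraction map has a fixed point like this; the question is which map the market is actually iterating.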

The thing is that market monetarism doesn't think there is that kind of freedom, and the reason is that market monetarism is almost entirely expectations. This is an uncontroversial categorization of market monetarism. For example, Scott Sumner wrote a post to that effect:

Money and Inflation, part 5: It’s (almost) all about expectations (1 Apr 2013)

And here is a quote from a Nick Rowe comment at Tony Yates' blog that's even more concise:
Monetary policy is 99% expectations, so how monetary policy is communicated is 99% of how it works.
Why does market monetarism's insistence that the theory is 99% expectations lead inexorably to the conclusion that market monetarism makes the (erroneous) claim to be the only theory of expectations and therefore indisputable? Let me tell you.

In general, the other theories have explicit dependencies on observable quantities x like the monetary base (i.e. not only do expectations have something to do with whether x is well above or below trend, but expectations can also be anchored by real observable quantities):

E(x) = NK[E(x), x]

but in the market monetarist model we have, expanding around x = 0:

E(x) = M[E(x), x]

E(x) ≈ M[E(x)] + α x  ... with α << 1

Concrete steppes (like QE) have little to do with expectations, and as Nick Rowe said above, the theory is 99% expectations (i.e. α ~ 0.01). Scott Sumner would have to lump what I call α x into an error term or "systematic error" term SE in his post here. It represents the difference between pure expectations (an NGDP futures market or credible central bank target) and reality. I'd call it the influence of the actual value of NGDP on the expected value of NGDP.

So what's wrong with that, you say? Well, the explicit dependence on x above is what makes these theories with expectations different from each other, since they're all based on people making decisions. It's what forms the basis of the model dependence of the expectations. In Keynesian models, x includes things like interest rates and unemployment. In RBC models, x includes "technology". The lack of an explicit dependence on x in M is what I mean by model independent expectations in this post. The explicit dependence is also what couples a theory to the empirical data. Without it, the only dependence on x is via E(x) -- that is to say, the value of x doesn't matter, only what people think x is (like what Allan Meltzer thinks inflation is, or what Republicans think Obamacare is).

Here's the kicker, though. Since all of these theories are based on economic agents (people) with human expectations, market monetarism is making the claim to be the only expectation-based theory. If we expand around x in a generic theory T ...

E(x) = T[E(x), x] ≈ T[E(x)] + τ x + ...

(2) E(x) ≈ T[E(x)]

T is completely unspecified right now. There are two possibilities here. The first is that equation (2) doesn't specify T. In that case, whether T = NK or T = M depends on what humans believe and there is no specific theory of pure expectations (or you just have to convince the market that T = X, and it is entirely political ... X could be communism or mercantilism or the Flying Spaghetti Monster).

Obviously market monetarists don't believe that. Therefore they must believe that equation (2) specifies T. In that case, market monetarists are claiming T = M. Basically, the first term in any expansion of any theory in x is M (i.e. the first term is unique) ...

T[E(x), x] ≈ M[E(x)] + τ x + ...
NK[E(x), x] ≈ M[E(x)] + k x + ...
M[E(x), x] ≈ M[E(x)] + α x + ...

But also, α, τ, and k are all small (according to market monetarism), so the first term is all that matters! If you expanded a theory and didn't end up with M[E(x)] as a first term, then whatever that expansion was would be a theory equally valid to market monetarism.

Physicists out there probably recognize this idea: market monetarism is making the claim (without proof or comparison to empirical data) to be the universality class of macroeconomic systems. Universality classes are why the same kinds of processes happen in totally different systems, and why things like the normal distribution show up everywhere.

This is a lot different from e.g. Scott Sumner just saying "my theory is right". It is Scott Sumner saying "every theory reduces to my theory" [3]. He is not explicitly saying this; it is implicit in his argument that the theory is primarily expectations and somehow unique (or at least has a reason to be advocated besides pure opinion).

Now that I've gone off the deep end of abstraction, let me close with something concrete to show how preposterous this is.

Kittens.

Yes, kittens. NPR's Planet Money did an experiment to illustrate Keynes' "beauty contest" in markets, but it gives us an excellent illustration of expectations and theories of expectations. Planet Money put up three pictures of animals (a kitten, a slow loris -- my personal favorite -- and a polar bear cub) and asked people not only which one they thought was the cutest, but which one they thought everyone else would think is the cutest ... i.e. the expected winner of the poll.

Here are the images and results:

[Image: the three contenders and the poll results. Picture from NPR's Planet Money.]

I have a couple of theories for the result. The first one is that the most commonly experienced critter will be expected to win the poll. Call this theory MC, and it depends on the actual data of which critter is most common. Call that c. The second theory is that the one with the biggest eyes (relative to body size) will be expected to be the winner. Call this BE and it depends on eye size e. Both of these theories will expect the kitten to win. In our notation above, we have the self-consistent (e.g. Nash) equilibria: 

E(c) = MC[E(c), c]

E(e) = BE[E(e), e]

(If you repeat the "game" with the same pictures, the expected-winner result would rapidly converge to nearly 100% for the kitten.)

Now if we make the market monetarist assumption that the empirical values (c and e) have little influence on the result, we can say:

MC[E(c), c] ≈ MC[E(c)] + α c
BE[E(e), e] ≈ BE[E(e)] + α e

with α being small in both cases. The α terms measure how much the fact that the kitten actually is the most common critter of the three influences what people think the most common critter is (or how much the critters' measured eye sizes in the pictures influence how big people think they are). Now take α → 0 and look at our self-consistent theories above:

E(c) ≈ MC[E(c)]
E(e) ≈ BE[E(e)]

These are not the same theory! However, the market monetarist claim effectively says that the "Most Common" theory and the "Big Eyes" theory must be equivalent -- or else market monetarists are advocating something that is pure opinion (taking α → 0 has decoupled our theory from the 'concrete steppes' of empirical data).
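Here's a tiny numeric sketch of that point (the maps and numbers are entirely made up by me): two different self-consistent theories, each iterated to its fixed point, stay different as the empirical coupling α goes to zero.

```python
# Two toy expectation theories with an empirical coupling alpha.
# As alpha -> 0 each remains self-consistent, but they settle on
# different fixed points -- decoupling from data doesn't make them agree.

def fixed_point(theory, x, alpha, guess=0.5, iters=500):
    E = guess
    for _ in range(iters):
        E = theory(E) + alpha * x
    return E

MC = lambda E: 0.9 * E + 0.05   # stand-in for the "most common" theory
BE = lambda E: 0.5 * E + 0.40   # stand-in for the "big eyes" theory

for alpha in (0.1, 0.01, 0.0):
    print(alpha, fixed_point(MC, 1.0, alpha), fixed_point(BE, 1.0, alpha))
# at alpha = 0 the fixed points are 0.5 and 0.8: different theories
```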

Update 27 Feb 2015:

The Keynesian beauty contest above also illustrates Noah Smith's contention that uninformative data is the reason we can't select between different theories in macro. We'd need to see a lot more critters to determine which of the "big eyes" or "most common" theories was correct (or whether neither of them is).

Footnotes:

[1] What I write here actually borrows a lot from physics; E(x) is a quantum field and M[...] is essentially a path integral given a Lagrangian ('the theory'), in an abuse of notation. So one would view E(x) = M[E(x)] as a matrix element/expectation value, in a self-consistent field approach.

[2] Sumner:
Unfortunately the role of expectations makes monetary economics much more complex, potentially introducing an “indeterminacy problem,” or what might better be called “solution multiplicity.”  A number of different future paths for the money supply can be associated with any given price level.  Alternatively, there are many different price levels (including infinity) that are consistent with any current money supply.
[3] I should add "as you decouple it from empirical data" to that quote. As α → 0 (or k or τ) the theory decouples from direct contact with empirical data. It is no longer about empirical data, but what you interpret markets (or important economic actors like the central bank) to think about empirical data.


Wednesday, February 25, 2015

Do the different market models work simultaneously?

Commenter LAL asked whether the models (fit to empirical data) that I presented all together for the first time in this draft of a section [1] of an eventual paper on information equilibrium could work simultaneously. For most of them, this is actually a trivial direct product, since the variables and parameters aren't directly related except by having a common information source NGDP = N (the IS-LM model has a common 'price' in the interest rate i).

However, there is one potential issue, though it isn't relevant to the post [1] as presented. In [1], I used total hours worked (H) as the information receiver (destination) in the labor market, which has parameters independent of the total employed (L) used in the Solow model. The issue is that one could use L as the receiver in the labor market, which means the model parameters would become a subset of the Solow model's, and so the models would have to be solved simultaneously. The prices (detectors) in the Solow model weren't used, so we can take p2 to be the price level P without impacting the results.

I drew up a diagram to illustrate the model (aggregate demand, N, in the center as the common information source)  and the potential for conflict between the two models (Solow highlighted in blue and labor in red). The detectors are all shown as labels on the arrows (all of the notations are as in [1]):


Sure enough, the fits to the Solow model and the labor market model can work simultaneously without adding additional parameters (if you take advantage of the normalization freedom of the price level). Here are the results:


I changed the colors to match up with the diagram above, but otherwise this is the same result. So, yes, the models work simultaneously. Note that I showed here that H and L are in information equilibrium with each other (n.b. the difference in notation: L is E at the link, because there I used L for the civilian labor force), so there is no additional conflict between the two models of the labor market (it turns out information equilibrium -- IE -- is an equivalence relation, so if A is in IE with B and B is in IE with C, then A is in IE with C).
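For the transitivity piece, here's a quick sketch of the derivation (my notation; the differential equation is the same general information equilibrium condition used in [1]). If

$$
\frac{dA}{dB} = \frac{1}{k_{1}} \; \frac{A}{B} \;\; \Rightarrow \;\; A \sim B^{1/k_{1}} \;\;\;\; \text{and} \;\;\;\; \frac{dB}{dC} = \frac{1}{k_{2}} \; \frac{B}{C} \;\; \Rightarrow \;\; B \sim C^{1/k_{2}}
$$

then substituting gives $A \sim C^{1/(k_{1} k_{2})}$, which satisfies $dA/dC = (1/(k_{1} k_{2})) \; A/C$ -- an information equilibrium relationship between A and C with IT index $k_{1} k_{2}$.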

This plucking model, redux



The plucking model is back on the blogs (H/T Mark Thoma, both for the link and for writing the blog post that introduced me to the model in the first place). As a reminder, temporary bouts of non-ideal information transfer (or reversals of the "second law" of economics) basically lead to a "plucking model" view:

http://informationtransfereconomics.blogspot.com/2013/12/this-plucking-model.html

http://informationtransfereconomics.blogspot.com/2014/05/the-effect-of-expectations-in-economics.html

http://informationtransfereconomics.blogspot.com/2014/12/an-information-transfer-traffic-model.html

http://informationtransfereconomics.blogspot.com/2013/12/plucking-rgdp-growth.html

Jaynes on entropy in economics



The idea behind the information equilibrium framework for economics is an attempt to leverage the very useful principle of maximum entropy formulated by E.T. Jaynes without presuming the form of the economic "constraints", i.e. the conservation laws and forces.

Back in 1991, Jaynes mused about applying entropy to economics, and I present a selection here. I hadn't gone back and re-read the paper since developing my own concept of economic entropy, and it makes me think I should look at entropy gradients.

Anyway, I think this discussion helps convey how an economy can work without rational, utility-maximizing agents -- economic entropy appears to be governed by NGDP = N, with S ~ log N! ~ N log N, and we can show ΔS ~ ΔN. What is interesting is that e.g. recessions appear to be violations of what would be called the "second law of economics". The reason you only get brief microscopic violations of the second law of thermodynamics, but can get serious violations in economics, is that humans will spontaneously coordinate their actions ... in a panic or a less severe sudden onset of economic pessimism (reducing entropy).
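To spell out where ΔS ~ ΔN comes from (a one-line check of my own, using Stirling's approximation):

$$
S \sim \log N! \approx N \log N - N \;\; \Rightarrow \;\; \Delta S \approx (\log N) \, \Delta N
$$

and since $\log N$ varies slowly at a given economic scale, changes in entropy track changes in nominal output.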

I was unable to link directly to the paper, but here's a Google search that has it at the top. Enjoy!

https://www.google.com/search?q=entropy+economics+jaynes

HOW SHOULD WE USE ENTROPY IN ECONOMICS?
E. T. Jaynes

What that [economic entropy picture] suggests is the following. Even though a neighboring macroeconomic state of higher entropy is available, the system does not necessarily move to it. A pile of sand does not necessarily level itself unless there is an earthquake to shake it up a little. The economic system might just stagnate where it is, unless it is shaken up by what an Englishman might call a "dither" of some sort.

...

In our conjectured picture of things, the dither that prevents economic stagnation and drives us up the entropy hill is a kind of turbulence injected into the macroeconomic variables by fluctuations in the underlying microeconomy. By this means, the macroeconomic state is constantly driven to "exploring the possibilities" of neighboring states.

...

In economics, the idea of the dither was anticipated by Keynes, who attributed it to "animal spirits", which cause people to behave erratically. We think of the dither in more general terms, simply the result of many independent individual decisions, not necessarily erratic or irrational. Indeed, most individuals will act according to what seem to them, "rational expectations." However, without the entropy factor Keynes did not find the phenomenon that our model considers a fundamental cause of economic change. In constantly exploring the neighboring states, the economy is always more likely to move to one of higher than lower entropy, simply because there are more of them (greater multiplicity). Thus the dither not only introduces random uncertainty into macroeconomic variables, it drives a systematic movement of the economy. In fact, mathematical analysis shows that the average drift velocity in the macroeconomic space is proportional to [the entropy gradient]

The cobweb model


In writing the posts about expectations I stumbled across the cobweb model of how expectations can impact prices in the short run, creating periodic fluctuations and volatility. The basic idea is that there are two possibilities for a series of price adjustments, convergent and divergent [diagrams from Wikipedia]:


If we take the price elasticity conditions $e^{s} \lt |e^{d}|$ for the convergent case and $e^{s} \gt |e^{d}|$ for the divergent case, and use the information transfer model for the elasticities (convergent case shown):

$$
\frac{\kappa Q_{0}^{s}}{Q_{ref}^{s}} \lt \left| - \frac{Q_{0}^{d}}{\kappa Q_{ref}^{d}} \right|
$$

If we assume an equilibrium market price $P_{0}$, then we can relate the information source $Q_{0}^{d}$ and destination $Q_{0}^{s}$, as well as the reference values ($Q_{ref}^{x}$):

$$
Q_{0}^{d} = \kappa P_{0} Q_{0}^{s}
$$

$$
Q_{ref}^{d} = \kappa P_{0} Q_{ref}^{s}
$$

we obtain the conditions:

$$
\frac{\kappa Q_{0}^{s}}{Q_{ref}^{s}} \lt \frac{\kappa P_{0} Q_{0}^{s}}{\kappa^{2} P_{0} Q_{ref}^{s}}
$$

$$
\rightarrow \;\;\; \kappa \lt \frac{1}{\kappa}
$$

$$
\rightarrow \;\;\; \kappa^{2} \lt 1
$$

for the convergent case, and analogously

$$
\kappa^{2} \gt 1
$$

for the divergent case. Basically, whether the price series converges or diverges is set by the value of $\kappa$.
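As a sanity check, here's a minimal cobweb simulation (my toy setup, in log deviations from equilibrium): supply responds to last period's price with elasticity $\kappa$ and demand clears with elasticity $-1/\kappa$, so $p_{t} = -\kappa^{2} \, p_{t-1}$ and the price path converges exactly when $\kappa^{2} \lt 1$.

```python
# Cobweb dynamics in log deviations from equilibrium (toy parameters).

def cobweb(kappa, p0=0.1, steps=20):
    prices = [p0]
    for _ in range(steps):
        q = kappa * prices[-1]   # supply: q = kappa * p_{t-1}
        p = -kappa * q           # demand clears: q = -(1/kappa) * p
        prices.append(p)
    return prices

for kappa in (0.8, 1.2):
    path = cobweb(kappa)
    verdict = "converges" if abs(path[-1]) < abs(path[0]) else "diverges"
    print(f"kappa = {kappa}: {verdict}")
```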

Tuesday, February 24, 2015

Stubborn theoretical ideas

Commenter LAL pointed me to a working paper by Chris Sims [pdf] on rational inattention that represents the closest approach by an economist to the way I am using information theory.

But it also usefully sums up the history of economic thought and is an excellent demonstration of path dependence in theoretical pursuits. Economics got its own ball rolling with the idea that people try to maximize utility, which explained market forces like supply and demand ...

Idea: supply and demand

Problem: how does it work?

Solution: people make utility maximizing decisions

Problem: persistent unemployment

Solution: Phillips curve, sticky prices

Problem: they change

Solution: add expectations [1]

Problem: intractability

Solution: rational expectations [2]

Problem: EMH not exactly true

Solution: rational inattention

...

Utility maximization was an important step in economics, as it led to an understanding of the basic forces of markets back in the 1800s. I think it may have outlived its usefulness, though ... but that's not the fault of economics as a field. Here's an analogous progression in physics, where humans behaving as utility maximizers is swapped out for the ether that was used to explain how light could be a wave (a more successful theory than the corpuscular theory).

Idea: light waves

Problem: waves propagate in medium

Solution: ether

Problem: shouldn't we be moving with respect to ether?

Solution: partial ether dragging

Problem: can't measure movement with respect to ether

Solution: complete ether dragging

Problem: inconsistent with astronomy

Solution: length contraction

Problem: nature appears to be conspiring to prevent measuring ether motion

Solution: speed of light is constant and there is no ether

I had always wondered why the idea of ether took so long to dislodge. I think there is an entropic force that works to counteract changes in theoretical ideas ...

Seriously, when you create a model to explain how something works (light waves, markets), but then the main idea of that model (ether, utility maximization) starts to be the source of all the new problems, with new work-arounds that essentially seek to diminish the impact of the original model, then you probably have a path dependence problem. The history of electromagnetism since the advent of waves consists primarily of an effort to minimize the impact of the ether introduced so that people could accept light as waves. Analogously, the history of economics since the advent of supply and demand consists primarily of an effort to show how deviations from utility maximization can be explained.

I'd like to think that I could add a bit at the bottom of the economics list ...

Problem: conspiracy of factors preventing human utility maximizing behavior from impacting markets [3]

Solution: hey, maybe markets aren't the result of utility maximizing behavior?


Footnotes:

[1] The idea that Keynesian economists neglected the inflation-augmented Phillips curve is something of a straw man argument, but it is useful to include here because there is a difference between considering the theoretical idea and implementing it in a model.

[2] One thing I'd like to note is that the SMD theorem basically says that rationality assumptions do not carry over from micro to macro, so all of this should have been nipped in the bud around the time of the Lucas critique ... which actually contradicts the SMD theorem unless you assume that an economy can be represented by an individual agent, per Kirman 1992:
Now if the behavior of the economy could be represented as that of an individual, ... [human utility maximizing behavior] would be saved, since textbook individual excess demand functions do have unique and stable equilibria. This is where the representative individual comes into the picture. By making such an assumption directly, macroeconomists conveniently circumvent these difficulties, or put alternatively, since they wish to provide rigorous microfoundations and they wish to use the uniqueness and stability of equilibrium and are aware of the Sonnenschein-Debreu-Mantel result, they see this as the only way out.
[3] These 'conspiracies' are:
  1. SMD theorem, dodged using a representative agent assumption
  2. Rational expectations aren't empirically accurate and prices are sticky, fixed by new ideas like e.g. rational inattention
  3. Utility is unobservable, so assume weak axiom of revealed preference

Monday, February 23, 2015

Information equilibrium paper (draft) (macroeconomics)


Since I apparently can't seem to sit down and write anything that isn't on a blog, I thought I'd create a few posts that I will edit in real time (feel free to comment) until I can copy and paste them into a document to put on the arXiv and/or submit to the economics e-journal (H/T to Todd Zorick for helping to motivate me).

WARNING: DRAFT: This post may be updated without any indications of changes. It will be continuously considered a draft.
Macroeconomics

Since the information equilibrium framework depends on a large number of states for the information source and destination, it ostensibly would be better applied to the macroeconomic problem. Below are some classic macroeconomic toy models (and one macroeconomic relationship): the AD-AS model, Okun's law, the IS-LM model, and the Solow growth model.

[To be added, the price level/quantity theory of money]

AD-AS

The AD-AS model uses the price level $P$ as the detector, aggregate demand $N$ (NGDP) as the information source and aggregate supply $S$ as the destination, or $P:N \rightarrow S$, which immediately allows us to write down the aggregate demand and aggregate supply curves

$$
P = \frac{N_{0}}{k_{A} S_{ref}} \exp \left( - k_{A} \frac{\Delta N}{N_{0}} \right)
$$

$$
P = \frac{N_{ref}}{k_{A} S_{0}} \exp \left( + \frac{\Delta S}{k_{A} S_{0}} \right)
$$

Positive shifts in the aggregate demand curve raise the price level, as do negative shifts in the aggregate supply curve. Traveling along the aggregate demand curve lowers the price level (more aggregate supply at constant demand).
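A tiny numeric sketch of those two statements (the parameter values are mine, chosen arbitrarily), just evaluating the curves as written:

```python
# Evaluate the AD and AS curves above with made-up parameters.
import numpy as np

def P_ad(dN, N0=100.0, kA=1.0, Sref=100.0):
    return (N0 / (kA * Sref)) * np.exp(-kA * dN / N0)

def P_as(dS, S0=100.0, kA=1.0, Nref=100.0):
    return (Nref / (kA * S0)) * np.exp(dS / (kA * S0))

# a positive demand shift (larger N0) raises the price level:
print(P_ad(0.0, N0=100.0), P_ad(0.0, N0=110.0))   # 1.0 -> 1.1
# a negative supply shift (smaller S0) also raises the price level:
print(P_as(0.0, S0=100.0), P_as(0.0, S0=90.0))    # 1.0 -> ~1.11
# traveling along the AD curve lowers the price level:
print(P_ad(0.0), P_ad(10.0))                      # 1.0 -> ~0.905
```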


Labor market and Okun's law

The labor market uses the price level $P$ as the detector, aggregate demand $N$ as the information source and total hours worked $H$ (or total employed $L$) as the destination. We have the market $P:N \rightarrow H$ so that we can say:

$$
P = \frac{1}{k_{H}} \; \frac{N}{H}
$$

Re-arranging and taking the logarithmic derivative of both sides:

$$
H = \frac{1}{k_{H}} \; \frac{N}{P}
$$

$$
\frac{d}{dt} \log H = \frac{d}{dt} \log \frac{N}{P} - \frac{d}{dt} \log k_{H}
$$

$$
\frac{d}{dt} \log H = \frac{d}{dt} \log \frac{N}{P} - 0 = \frac{d}{dt} \log R
$$

where $R$ is RGDP. The growth rate of total hours worked (or total employed) fluctuates along with RGDP growth (Okun's law).
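Here's what this relationship looks like in code (the index series below are made-up numbers purely for illustration): the growth rate of hours should track $d \log (N/P) / dt$.

```python
# Toy check of d/dt log H = d/dt log (N/P) with made-up index series.
import numpy as np

N = np.array([100.0, 104.0, 109.0, 112.0, 118.0])  # NGDP index (toy)
P = np.array([100.0, 102.0, 104.0, 106.0, 108.0])  # price level (toy)
H = np.array([100.0, 101.9, 104.7, 105.5, 109.1])  # hours index (toy)

dlogH = np.diff(np.log(H))
dlogR = np.diff(np.log(N / P))   # RGDP growth

print(np.round(dlogH, 3))
print(np.round(dlogR, 3))        # the two series should move together
```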


IS-LM

The IS-LM model uses two markets along with an information equilibrium relationship. Let $p$ be the price of money in the money market (LM market) $p:N \rightarrow M$ where $N$ is aggregate demand and $M$ is the money supply.

We have:

$$
p = \frac{1}{k_{p}} \; \frac{N}{M}
$$

We assume that the interest rate $i$ is in information equilibrium with the price of money $p$, so that we have the information equilibrium relationship $i \rightarrow p$ (no need to define a detector at this point). Therefore the differential equation is:

$$
\frac{di}{dp} = \frac{1}{k_{i}} \; \frac{i}{p}
$$

With solution (we won't need the additional constants $p_{ref}$ or $i_{ref}$):

$$
i^{k_{i}} = p
$$

And we can write [note: this is a new take (here's the old take) on the constant $k_{i}$ that I've called $c$]:

$$
i^{k_i} = \frac{1}{k_{p}}  \; \frac{N}{M}
$$

Already this is pretty empirically accurate:
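For readers who want to poke at this themselves, here is a hedged sketch of the comparison. The FRED series choices (GDP, AMBSL, GS10) and the log-linear regression are my assumptions about how to operationalize the check, not a prescription from the model:

```python
# Fit log i = (1/k_i) log (N / (k_p M)) by linear regression on FRED data.
import numpy as np
from pandas_datareader import data as web

start, end = "1960-01-01", "2015-01-01"
N = web.DataReader("GDP", "fred", start, end)     # nominal GDP (quarterly)
M = web.DataReader("AMBSL", "fred", start, end)   # adjusted monetary base
i = web.DataReader("GS10", "fred", start, end)    # 10-year Treasury rate

df = N.join([M.resample("QS").mean(), i.resample("QS").mean()]).dropna()
x = np.log(df["GDP"] / df["AMBSL"])
y = np.log(df["GS10"])

slope, intercept = np.polyfit(x, y, 1)  # slope estimates 1/k_i
print("k_i ≈", 1.0 / slope)
```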


We can now rewrite the money (LM) market and add the goods (IS) market as coupled markets with the same information source (aggregate demand) and same detector (interest rate, directly related to -- i.e. in information equilibrium with -- the price of money):

$$
i^{k_i} : N \rightarrow M
$$
$$
i^{k_i} : N \rightarrow S
$$

where $S$ is the aggregate supply. The LM market is described by shifts in the money supply $M$ as well as shifts in the information source $N_{0} \rightarrow N_{0} + \Delta N$, so we write the LM curve as a demand curve, with shifts:

$$
i^{k_i} = \frac{N_{0} + \Delta N}{k_{p} M_{ref}} \exp \left( - k_{p} \frac{\Delta M}{N_{0} + \Delta N} \right)
$$

The IS curve can be straightforwardly written down as the demand curve in the IS market:

$$
i^{k_i} = \frac{N_{0}}{k_{S} S_{ref}} \exp \left( - k_{S} \frac{\Delta N}{N_{0}} \right)
$$


Solow growth model

Let's assume two markets $p_{1}:N \rightarrow K$ and $p_{2}:N \rightarrow L$:

$$
\text{(3a) }\frac{\partial N}{\partial K} = \frac{1}{\kappa_{1}}\; \frac{N}{K}
$$

$$
\text{(3b) }\frac{\partial N}{\partial L} = \frac{1}{\kappa_{2}}\; \frac{N}{L}
$$

The economic rationale for equations (3a,b) is that the left-hand sides are the marginal productivity of capital/labor, which are assumed to be proportional to the right-hand sides -- the productivity per unit capital/labor. In the information transfer model, the relationship follows from a model of aggregate demand sending information to aggregate supply (capital and labor) where the information transfer is "ideal", i.e. there is no information loss. The solutions are:

$$
N(K, L) \sim f(L) K^{1/\kappa_{1}}
$$

$$
N(K, L) \sim g(K) L^{1/\kappa_{2}}
$$

and therefore we have

$$
\text{(4) } N(K, L) = A  K^{1/\kappa_{1}} L^{1/\kappa_{2}}
$$

Equation (4) is the generic Cobb-Douglas form. In this case, unlike equation (2), the exponents are free to take on any value (they are not restricted to constant returns to scale, i.e. $1/\kappa_{1} + 1/\kappa_{2} = 1$). The resulting model is remarkably accurate:


It also has no changes in so-called total factor productivity ($A$ is constant). The results above use nominal capital and nominal GDP $N$ rather than the usual real capital and real output (RGDP, $R$).
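For completeness, here's a minimal sketch of the log-linear fitting procedure (the function and the synthetic check are my own illustration; the actual fit in this section used nominal capital, labor, and NGDP data):

```python
# Fit the Cobb-Douglas form (4) by log-linear least squares:
# log N = log A + (1/kappa_1) log K + (1/kappa_2) log L
import numpy as np

def fit_cobb_douglas(N, K, L):
    X = np.column_stack([np.log(K), np.log(L), np.ones(len(N))])
    (inv_k1, inv_k2, logA), *_ = np.linalg.lstsq(X, np.log(N), rcond=None)
    return inv_k1, inv_k2, np.exp(logA)

# synthetic check: N = 2 K^0.6 L^0.7 should recover (0.6, 0.7, 2.0)
rng = np.random.default_rng(0)
K = rng.uniform(1.0, 10.0, 100)
L = rng.uniform(1.0, 10.0, 100)
N = 2.0 * K**0.6 * L**0.7
print(fit_cobb_douglas(N, K, L))
```

Note that nothing in the regression imposes $1/\kappa_{1} + 1/\kappa_{2} = 1$; the exponents come out unconstrained, as in the text.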

Summary

We have shown that several macroeconomic relationships and toy models can be easily represented using the information equilibrium framework, and in fact are remarkably accurate empirically. Below we list a summary of the information equilibrium models in the notation

detector : source → destination

i.e. price : demand → supply. Also the information equilibrium models that do not require detectors are shown as 

source → destination.

The models shown here are:

AD-AS model

$$
P: N \rightarrow S
$$

Labor market (Okun's law)

$$
P: N \rightarrow H
$$
or

$$
P: N \rightarrow L
$$

IS-LM model

$$
(i \rightarrow p ) : N \rightarrow M
$$
$$
i : N \rightarrow S
$$

Solow growth model

$$
N \rightarrow K
$$
$$
N \rightarrow L
$$

Saturday, February 21, 2015

Expectations

[Image: Great Expectations. Wikimedia Commons.]

One of the ways in which I tend to diverge from 'mainstream' economics is in my treatment of expectations, broadly construed. I've talked about this before (see also here and here), and I was reminded of it by commenter LAL recently.

Menzie Chinn states the mainstream view succinctly in the first half of this sentence:
In point of fact, in modern macroeconomics where expectations of the future are central, the most important variables are often not observable.
Emphasis in the original. Chinn is defending the approach of using unobservable quantities against criticism from a commenter, Tom, and continues:
So when one hears a criticism like that leveled by Tom, realize that taking such a criticism to its logical conclusion means that almost no macroeconomic discussion can proceed. Everything will have to have appended to it the adjective “estimated”.

This represents one branch of opposition to expectations: market expectations are unobservable, so they should be left out of economic theory. Chinn's defense is that expectations can be estimated from other variables (e.g. the TIPS spread as a measure of inflation) -- and I agree. I have no objection to the use of unobservable theoretical constructs in theories. As a physicist, I have no problem using quantum mechanical wavefunctions (unobservable) in calculations. The properties of the wavefunction (like its phase and amplitude) can be estimated through measurements, like the probability density of electrons striking a screen in the two-slit experiment.

My issues with expectations are different, and two-fold ...

Arbitrariness: While there is a Schrodinger equation to define the unobservable wavefunction, there are no universally accepted constraints on what expectations are allowed to be. As long as one can come up with a plausible-sounding argument ... what Noah Smith refers to as judgment calls ... any model of human beliefs is allowed into the theory.

One can incorporate the latest theory from the study of human behavior in microeconomic settings, which is interesting. But more typically, it seems that macroeconomists just make up plausible sounding assumptions, put them in the theory and see what happens. There is nothing wrong with this as long as you make a connection to empirical data. If you don't, you could end up with a chameleon model [pdf]. As Pfleiderer defines them:
A model becomes a chameleon when it is built on assumptions with dubious connections to the real world but nevertheless has conclusions that are uncritically (or not critically enough) applied to understanding our economy.
I go a bit further than Pfleiderer and say that these problematic assumptions are almost entirely ad hoc assumptions about expectations. I once made a joke about the arbitrariness of models of expectations (calling them bad ad hoc).

Rational expectations (agents have the expectations produced by the model -- i.e. model-consistent expectations) are one way to deal with the arbitrariness, but they just shift the arbitrariness from the expectations to the model itself. Again, this is fine if you then test the model with empirical data.

Arbitrariness can always be solved with a dose of empirical success.

Centrality: This is where I am one of Nick Rowe's so-called people of the concrete steppes. This is also where I think macroeconomics is actually wrong and it can be shown with a couple of simple thought experiments.

First thought experiment: what happens to your theory if the market doesn't believe your theory? Look at market monetarism, for example. If markets didn't believe the theory, then it's a bunch of nonsense. Equation (3) at this link would be entirely determined by the undefined systematic error term. NGDP forecasts would be independent of central bank targets. Essentially, the whole framework falls down. Now it is possible this is how the real world works -- the only functioning economies are those that believe correct or approximately correct theories of markets. But I believe markets naturally arise without the economic agents even knowing what economics is.

Second thought experiment: let's assume there exists a fundamental theory of macroeconomics -- a macroeconomic theory of everything. Call this T0. Let's say, according to T0, fiscal policy has no effect on NGDP or inflation (e.g. monetary offset is the mechanism). Now let's say we live in a world that doesn't know T0, but rather has a 'Keynesian' theory TK in its collective head where fiscal policy has a positive multiplier; fiscal expansion (government spending) leads to increased NGDP and higher inflation.

Now what happens in this thought experiment when a large stimulus package like the ARRA is announced?
  • Does inflation rise to the level predicted by TK, i.e. the expected value, and stay there? No. That would contradict T0.
  • Does inflation only rise initially according to TK, but then fall back to a value governed by T0? In this option, expectations cause market volatility, but have no impact on the long run.
So expectations can only have a limited impact under the assumptions made in the thought experiment: T0 exists and is unknown, TK is wrong. So how does one escape from this problem? By ignoring the premise of the thought experiment:

  1. Macroeconomics (or the market) is always right (T0 is known): there is no possibility of two theories T0 and TK. Markets cannot believe incorrect theories like TK. Only T0 and T0 expectations exist. This means you cannot know what the correct expectations are until you know T0 -- i.e. it assumes you already know the fundamental theory if you include expectations in your model.
  2. Rational expectations (assume TK is right): expectations must be model consistent, i.e. you can only have TK expectations in theory TK and T0 expectations in theory T0. This is a weakening of option 1 where more theories than just T0 are allowed. This is sensible, but since you can build models that do anything, expectations can do anything. Additionally, there is no way to argue against any particular model except empirically.
  3. Expectations are central (T0 is pure expectations, therefore TK = T0): there is no possibility of two theories TK and T0. If TK is the dominant theory, then expectations are TK and in fact the fundamental economic theory of everything is TK. T0 is then whatever the market believes, regardless of anything that is not expectations in your theory (such as measurements of the money supply). Concrete steps (like printing currency) are unnecessary. I've referred to this as model-independent expectations in the past. This implies that all you need to do is convince markets that Keynesianism is right to make Keynesianism right. Or you could convince markets that monetarism is right, which would make monetarism right. There is no reason to believe any particular theory so you can't logically suggest monetarism over Keynesianism, you can only put it forward as political preference.
  4. Markets are always right and expectations are central (T0 is known and T0 is pure expectations): this is the combination of 1 and 3 and is the method of e.g. market monetarism [1]. This not only assumes you are right, but additionally that everyone knows you are right. It effectively disallows the possibility in 3 that markets could be convinced of a different economic theory. However, like 3, it also does not require concrete steps.

These are all excellent options if you are extremely confident you are correct and don't care about comparing your model to empirical data. 

Overall, I'd sum up my problem with the centrality of expectations with a question to Scott Sumner: what happens to your theory if markets don't believe market monetarism? In a sense, this question cannot be addressed by market monetarism. The "market" piece presumes markets believe the theory (i.e. market expectations are consistent with market monetarism, i.e. assuming rational expectations in market monetarism ... I called this circular reasoning before, but charitably, this could be taken that market monetarism is the only self-consistent theory of expectations as I mention in a comment at that link).

I personally like to hold on to some skepticism of theory and believe that comparing to empirical data is a necessity. Therefore, while I consider expectations to have real effects in economies, I do not think e.g. expectations set by a central bank can hold inflation on a 2% path indefinitely. In particular, economies that try will eventually fail. I am uncertain whether the information transfer model is T0, so there is a possibility I will be wrong. And the only thing that happens if the market were to believe the information transfer model is that there would be a minor reduction in excess volatility.



Update 7/18/2015:

[1] In a later post, I realized Sumner's view of market monetarism is actually closer to option 3 above. I think (but am not sure) Nick Rowe's view is closer to 4 still, though.

Tuesday, February 17, 2015

Why do macroeconomists think they know what it's like to be a physicist?


Noah Smith and Scott Sumner decide to presumptively step into the shoes of a physicist. First, Noah:
String theory is something you hear Brian Greene or Michio Kaku talk about, and you think "Wow, neato, the Universe is mysterious and funky!", and then you never think about it again.
And then there's Scott:
Imagine I was debating a string theorist, and I told him the theory was a bunch of worthless nonsense, as it was not refutable. He might respond that I didn't know what I was talking about. And to be honest I would have to agree with him, I don't know what I'm talking about in the realm of string theory. And having once read someone who does, who also criticizes the theory for being unfalsifiable, doesn't change that fact.

I don't think many people would be insulted if you told them that they are not qualified to debate quantum mechanics, or biochemistry. But they do get offended when you tell them they are not qualified to debate macroeconomics. Why is that?

Perhaps because people can immediately recognize that fields like physics and biochemistry are way over their heads, but macro looks deceptively simple. ...
Of course, not being physicists themselves, Noah and Scott don't realize how often non-physicists decide to hold forth on the subject of quantum mechanics, string theory or the methodology of science. Say, people like ... Scott Sumner. Noah isn't much better -- having only an undergraduate education in physics. And don't get me started on the philosophers! As soon as someone at a party or out at a bar learns that I'm a physicist, they immediately try their hand at interpreting quantum mechanics. And I assure you people do get insulted when you tell them they have no idea what they are talking about.

Sure, it can be annoying when people don't bow to my superior credentials in this arena. But you know what allows us physicists to come out of these scraps with our egos intact, instead of asking why people don't take our PhDs seriously?

Because we actually do know what we are talking about.

[Take it away, Robert Waldmann.]

Seriously, we physicists can discuss at length, with some really good arguments, why we think quantum uncertainty isn't just due to some unknown underlying deterministic theory. And that's a field where there is still new research!

In contrast, Scott Sumner wrote this blog post. I am not a macroeconomist, but I can assure you as a person well-versed in complex non-linear systems that his model is laughable. Yes, Scott calls it a "toy" model. In physics, the Ising model is a toy model (even Wikipedia says so). But the Ising model doesn't just say the magnetization is what it was in a previous time period plus a "magnetizationing"  force that moves the magnetization to what it is now. That is to say Scott's "toy" model is pure phlogiston. It quite literally says that NGDP is created by outputiston. That's not a toy model -- that's a question begging model (in the original sense of the phrase).

And Scott wonders why non-experts don't take his credentials seriously!

The area where Scott is most convincing is where he's not being a macroeconomist, but rather an economic historian. He knows a lot about the Great Depression. He has a lot of good counterexamples from history where various (in particular, Keynesian) models fail. He is not a stupid person. But when it comes to explaining data? There is precious little there there.

I am not sure I quite get why Noah is deciding to be diplomatic (as Waldmann puts it). I mean Noah wrote this:
Unlike hard scientists, macroeconomists must spend considerable effort persuading people of the likability of assumptions. The assumptions that macroeconomists "like most" are the ones around which consensus forms.
That is Noah's restatement of Karthik Athreya's (a macroeconomist himself) view of macroeconomics. Emphasis mine. Yep. Debating which assumptions are like, totally, the awesomest. Noah also wrote this:
But how much do you smooth [with an HP filter]? That's a really key question! If you smooth a lot, the "trend" becomes log-linear, meaning that any departure of GDP from a smooth exponential growth path - the kind of growth path of the population of bacteria in a fresh new petri dish - is called a "[business] cycle". But if you don't smooth very much, then almost every bend and dip in GDP is a change in the "trend", and there's almost no "cycle" at all. In other words, YOU, the macroeconomist, get to choose how big of a "cycle" you are trying to explain. The size of the "cycle" is a free parameter.

Noah seems to think that you need to have a PhD in economics to see that this is intellectual garbage. But really, anyone with any kind of technical background can see this. The thing is that macroeconomics is in such a nascent state that this kind of problem definition is still a pretty important advance. The EMH (Fama) and failures of the EMH (Shiller) simultaneously won the Nobel prize recently. Talk about problem definition! And those really are advances in the field!

Maybe that is the problem? Do macroeconomists not realize that their field of inquiry is still so young that it's still in the "non-experts can make an important contribution" phase? Keynes' General Theory came out in 1936. Samuelson's book is from 1947. I'd liken these to Galileo and Newton, not Feynman and Weinberg. Economics didn't even exist as an idea until around the time of Adam Smith's Wealth of Nations. We've known about electromagnetism at least since 600 BCE. That's about 2500 years between discovery and final theory (QED). Between 600 BCE and 1947 (and over 100 years after Newton), Faraday made many contributions to the field -- without any formal education in physics. I see myself arguing with the Hookes and Halleys of macroeconomics, not the Wittens and Weinbergs.

Macro isn't string theory, and the pronouncements of macroeconomists shouldn't be accepted with the limited pushback from non-experts that the pronouncements of string theorists warrant [1]. String theory is built on quantum mechanics along with special and general relativity. There are no fundamental laws that are a part of macro. There isn't even a fundamental framework (quantum field theory is a framework) ... the closest I've seen is DSGE. And what do Scott and Noah think of DSGE? Ha!

It's that framework that gives physicists (and string theorists) their clout. Our PhD programs teach us that framework. Hamiltonian systems, Lagrange multipliers, partition functions, Noether's theorem. These are the tools physicists (and chemists and engineers) use to separate the theoretical wheat from the chaff.

Even economists use Lagrange multipliers! Why can't a person with some undergraduate education hold forth on that?

What framework does an economics PhD (uniquely) teach that gives economists the right to go unquestioned by people who don't know that framework?

Footnotes:

[1] Even then, the argument brought up in Scott's example -- that string theory is untestable -- is a good argument! And non-experts can make that argument! And experts like myself can point to the string degrees of freedom accounting for the entropy of a black hole ... and not get all bent out of shape about it.

Thursday, February 12, 2015

Information equilibrium paper (draft) (introduction and outline)


Since I apparently can't seem to sit down and write anything that isn't on a blog, I thought I'd create a few posts that I will edit in real time (feel free to comment) until I can copy and paste them into a document to put on the arXiv and/or submit to the economics e-journal (another work trip to the middle of nowhere provided some time in the evenings, and H/T to Todd Zorick for helping to motivate me).

WARNING: DRAFT: This post may be updated without any indications of changes. It will be continuously considered a draft.

Title:

Information equilibrium as an economic principle

Outline:
  1. Introduction: Information theory, mathematical models of economics
  2. Basic information equilibrium model: Derive the equations [link]
  3. Supply and demand: Derive supply and demand, ideal and non-ideal, elasticity of demand [link, link]
  4. Other ways to look at the equation: generalization of Fisher, long run neutrality, transfer from the future to the present [link, link, link]
  5. Macroeconomics: The price level, changing kappa, liquidity trap, hyperinflation solutions, labor market (Okun's law), ISLM model (talk about P* model)
  6. Statistical mechanics: Partition function approach, economic "entropy" and temperature
  7. Entropic forces: Nominal rigidity, liquidity trap (no microeconomic representation)
  8. Conclusions: A new way to look at economics, does not invalidate microeconomics and re-derives some known results from macro, speculate about maximum entropy principle for selecting which Arrow-Debreu equilibrium is realized among the many

Introduction

In the natural sciences, complex non-linear systems composed of large numbers of smaller subunits provide an opportunity to apply the tools of statistical mechanics and information theory. Lee Smolin suggested a new discipline of statistical economics to study the collective behavior of economies composed of large numbers of economic agents.

A serious impasse to this approach is the lack of well-defined, or even definable, constraints enabling the use of Lagrange multipliers, partition functions and the machinery of statistical mechanics for systems away from equilibrium or for non-physical systems. The latter -- in particular economic systems -- lack e.g. fundamental conservation laws like the conservation of energy to form the basis of these constraints.

Lee Smolin, Time and symmetry in models of economic markets arXiv:0902.4274v1 [q-fin.GN] 25 Feb 2009

In order to address this impasse, Peter Fielitz and Guenter Borchardt introduced the concept of natural information equilibrium. They produced a framework based on information equilibrium and showed it was applicable to several physical systems. The present paper seeks to apply that framework to economic systems.

Peter Fielitz and Guenter Borchardt, "A general concept of natural information equilibrium: from the ideal gas law to the K-Trumpler effect" arXiv:0905.0610v4 [physics.gen-ph] 22 Jul 2014

The idea of applying mathematical frameworks to economic systems is an old one; even the idea of applying principles from thermodynamics is an old one. Willard Gibbs -- who coined the term "statistical mechanics" -- supervised Irving Fisher's thesis, in which Fisher applied a rigorous approach to economic equilibrium.

Mathematical models of economics: Fisher, Samuelson
Fisher, Irving. Mathematical Investigations in the Theory of Value and Prices (1892)
Fisher, Irving. The Purchasing Power of Money: Its Determination and Relation to Credit, Interest, and Crises. (1911a, 1922, 2nd ed)
  • quantity theory of money, equation of exchange
Samuelson, Paul. Foundations of Economic Analysis (1947)
  • Introduces Lagrange multipliers for economics
  • Le Chatelier's principle (general partial return to equilibrium)
  • Also cited Gibbs
The specific thrust of Fielitz and Borchardt's paper is that it looks at how far you can go with maximum entropy or information-theoretic arguments without having to specify constraints. This refers to partition function constraints optimized with the use of Lagrange multipliers. In thermodynamics language it's a little more intuitive: basically, the information transfer model allows you to look at thermodynamic systems without having defined a temperature (Lagrange multiplier) and without the related constraint (that the system observables have some fixed value, i.e. equilibrium).

Samuelson: meaningful theorems: maximization of economic agents = equilibrium conditions. This was a hypothesis from Samuelson. We don't need the equilibrium conditions (constraints), so we don't need to make this assumption, nor do we start from any notion of utility.

Samuelson didn't think thermodynamics could help out much more than he had shown:
There is really nothing more pathetic than to have an economist or a retired engineer try to force analogies between the concepts of physics and the concepts of economics. How many dreary papers have I had to referee in which the author is looking for something that corresponds to entropy or to one or another form of energy.
We hope this paper is neither pathetic nor dreary; however, we do derive a quantity that corresponds to an economic entropy (actually, entropy production) of an economy that goes as $\Delta S \sim \log N!$, where $N$ is nominal output, in section 6.

A word of caution before proceeding: the term "information" is somewhat overloaded across various technical fields. Our use of the word information differs from its more typical usage in economics, such as in information economics or e.g. perfect information in game theory. Instead of focusing on a board position in chess, we are assuming all possible board positions (even potentially some impossible ones, such as those including three kings). The definition of information we use is the one required when specifying a random chess board out of all possible chess positions, and it comes from Hartley and Shannon. It is a quantity measured in bits (or nats), and has a direct connection to probability.

This is in contrast to e.g. Akerlof's information asymmetry, where the quality of a vehicle is better known to the seller than to the buyer. We can see that this is a different use of the term information -- how many bits the quality score requires to store is irrelevant to Akerlof's argument. Conversely, the perfect information in a chess board $C$ represents $I(C) \lt 64 \log_{2} 13 \simeq 237$ bits; this quantity is irrelevant in a game theory analysis of chess.
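As a quick check of that number (64 squares, each in one of 13 states: empty, or one of six piece types in either color):

```python
# Hartley information of specifying one of 13^64 board configurations
from math import log2

print(64 * log2(13))  # ≈ 236.8 bits, i.e. the "≈ 237" quoted above
```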

Akerlof, George A. (1970). "The Market for 'Lemons': Quality Uncertainty and the Market Mechanism". Quarterly Journal of Economics (The MIT Press) 84 (3): 488–500.
Hartley, R.V.L., "Transmission of Information", Bell System Technical Journal, Volume 7, Number 3, pp. 535–563, (July 1928).
Claude E. Shannon: A Mathematical Theory of Communication, Bell System Technical Journal, Vol. 27, pp. 379–423, 623–656, 1948. 

We propose the idea that information equilibrium should be used as a guiding principle in economics and organize this paper as follows. We will begin by introducing and deriving the primary equations of the information equilibrium framework, and proceed to show how the information equilibrium framework can be understood in terms of the general market forces of supply and demand. This framework will also provide a definition of the regime where market forces fail to reach equilibrium through information loss.

Since the framework itself is agnostic about the goods and services sold or the behaviors of the relevant economic agents, the generalization from widgets in a single market to an economy composed of a large number of markets is straightforward. We will describe macroeconomics, and demonstrate the effectiveness of the principle of information equilibrium empirically. In particular, we will address the price level and the labor market, where we show that information equilibrium leads to well-known stylized facts in economics. The quantity theory of money will be shown to be an approximation to information equilibrium when inflation is high, and Okun's law will be shown to follow from information equilibrium.

Lastly, we establish an economic partition function, define a concept of economic entropy and discuss how nominal rigidity and so-called liquidity traps may be best understood as entropic forces for which there are no microfoundations.


Okun, Arthur M., Potential GNP: Its measurement and significance (1962)

Sunday, February 8, 2015

A Socratic dialog on the simple model of information equilibrium

Here is the dialog I mentioned in the footnotes of this post ...

...

Oikomenia: Wait, wait ... what about diminishing marginal utility? What about the stuff they show in those MR university videos?? Supply and demand can't be as simple as taking the equation (total cost) = (gallons sold) x (price per gallon) and holding either total cost or gallons sold constant and letting the other vary! That's dumb!

Informatio: What specifically is wrong with it?

Oikomenia: Well, for one, the total sale isn't the quantity demanded. And you need to account for the observed fact that the number of gallons of gas demanded goes down as the price goes up, all else being equal -- not the total price of the gas supplied.

Informatio: We measure aggregate demand as the value of goods sold -- NGDP. And the quantity sold (equal to the amount demanded) does go down as the price goes up.

Oikomenia: Touché on NGDP. But in your model demand falls as the price goes up because you've fixed the relationship (total sale) = (gallons sold) x (price per gallon), not because people decide to spend their money on other stuff than gasoline or can't afford the increase.

Informatio: Actually, all I said was that the two numbers (gallons) and (total sale) were in information equilibrium. And note that in your story the price of gasoline depends on the prices of all other goods -- and those change depending on location. So the demand curve, and hence the price of gas, in Seattle depends more on the price of salmon than the price of gas in Houston does?

Oikomenia: Yes! But the influence is so minuscule relative to the total basket of goods we consume that it's hard to measure directly.

Informatio: Then how do you know it's there?

Oikomenia: Because that's how incentives work.

Informatio: So what you are saying is that you assume a human behavioral relationship between price and demand that implies something you can't measure? And then you take the observed relationship between demand and price as the model of that human behavior?

Oikomenia: Well, it sounds a bit circular if you put it like that.

Informatio: The demand curve slopes down because of human behavior. And we know what that human behavior is because the demand curve slopes down? What would you have done if it sloped up?

Oikomenia: That's a Veblen good! In that case, higher price makes it more desirable.

Informatio: So you have a human behavior explanation for a positive or negative slope of a demand curve?

Oikomenia: Another possibility is a Giffen good ... anyway, your model implies the price elasticities of supply are all one. The relative change in the quantity supplied is exactly equal to the relative change in price.

Informatio: That's because I chose the simplest case of information equilibrium where the units of information are the same for both numbers. I could change the units and get any price elasticity of supply you threw at me.

Oikomenia: Won't that change the total sale = gallons sold x price per gallon equation? That equation implies an elasticity of one.

Informatio: The equation does change.

Oikomenia: Then that means your reasoning for supply and demand has to change!

Informatio: Not really. The equation gets more complicated, but the essence of it is captured in the simple case. In order for two numbers to be in information equilibrium where variations in one show up in a deterministic way in the other, they basically have to be proportional in logarithm.
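An aside on the algebra in this exchange, since it is easy to reconstruct (notation mine: $N$ for total sale, $Q$ for gallons, $p$ for price). In the simple case, fixing the total sale in $N = pQ$ gives

$$p = \frac{N_{0}}{Q} \qquad\Rightarrow\qquad \frac{d\ln Q}{d\ln p} = -1$$

which is the unit elasticity Oikomenia objects to. If instead the two numbers are only proportional in logarithm, $\log N = k \log Q + \text{const}$, then $N \propto Q^{k}$ and the corresponding price is $p = dN/dQ = kN/Q \propto Q^{k-1}$, so that along the solution

$$\frac{d\ln Q}{d\ln p} = \frac{1}{k-1}$$

and, on one reading of Informatio's claim, any elasticity can be dialed in by the choice of $k$, with $k = 1$ reducing to (a rescaled version of) the simple equation.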

Oikomenia: So supply and demand for gasoline are just the result of keeping the numbers for the total sale and gallons sold in information equilibrium? There's no human behavior in it? I find that hard to believe.

Informatio: Do markets always behave like textbook cases of supply and demand?

Oikomenia: No, there are all kinds of market failures and behavioral effects.

Informatio: That's your answer. When markets work like economics 101, it's because human behavior isn't having an impact. It averages out. All you are left with is some basic mathematics of how two numbers can register the same amount of information.

Oikomenia: Whoa. You are assuming supply and demand represent more of a Platonic ideal than we economists do!

Informatio: Hahaha! I guess you're right! As markets become more ideal, they lose more and more of their dependence on the specifics of human behavior. That's how economics can be so mathematical, but be entirely about the behavior of human beings.

Oikomenia: But that also means that when we see there is human behavior affecting the outcome in empirical research and experiments, then your model is wrong. If you do surveys and find that people decide not to buy something because the price went up, then your reason -- it's just information equilibrium -- is wrong.

Informatio: How do you do an experiment where you cause a market price to go up? If you directly influence the price, then it isn't a market price anymore. If you increase demand, then this supposed fall in demand is measured from an artificially created higher demand. And if you reduce supply, there is actually less to go around, so somehow some people have to buy less of it, regardless of their feelings!

Oikomenia: But what about Vernon Smith's experiments?

Informatio: He gave his participants defined utilities in terms of numbers. If you assume a utility function behaves like a number, you've forced the system to exhibit supply and demand. The use of numbers enforces consistent preferences ... because numbers have a total order. You can't have A < B, B < C and C < A with numbers.
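To spell out the total order point (standard mathematics, not anything specific to this dialog): the strict order on the real numbers is transitive, so

$$A < B \;\wedge\; B < C \;\Rightarrow\; A < C$$

and a cyclic preference $A < B$, $B < C$, $C < A$ would force $A < A$, a contradiction. Assigning numeric utilities therefore rules out intransitive preferences by construction.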

Oikomenia: Yes, yes. Afriat's theorem. Revealed preference or transitivity alone don't result in diminishing marginal utility.

Informatio: They do if the numbers you chose for utility are dollar amounts in the unit of account!

Oikomenia: What about natural experiments? Like when one state raises its minimum wage and another doesn't? You can survey the owners of businesses and ask if they had to lay off workers, or not hire workers as planned.

Informatio: Ask business owners if raising the minimum wage made them hire fewer workers than they planned to? While you're at it, you should ask car company executives if higher CAFE standards will hurt their business.

Oikomenia: You can get data on their hiring, revenues and expenses.

Informatio: Even in mainstream economics you can't draw any conclusions from that natural experiment -- you have no way of knowing whether or not your system jumped from one Arrow-Debreu equilibrium to another.

Oikomenia: But if we observe that employment or hours worked go down and prices go up, then that is evidence that the supply and demand model is correct.

Informatio: Yes, that is true ... regardless of whether that supply and demand model is derived from human behavior, information theory ... or even if it was just made up by Alfred Marshall.

Oikomenia: Gah!! Isn't it obvious? If bacon gets more expensive at the store, you won't buy as much of it because you don't think it is worth it at that price!

Informatio: Yes, if bacon went to 50 bucks a pound, I'd probably use it less often. But if it goes from 6 to 7 bucks? If I'm making bacon miso soup, then I'm buying bacon. The price of bacon goes up because there is more demand or less supply, and in either case bacon becomes more scarce per person, so fewer people are going to buy it, whatever their reasoning. Sometimes they're sold out of the bacon that I like. I didn't buy less of it because it was too expensive or because I'd rather spend money on something else. It wasn't there. Sometimes bacon isn't there and so I forget that I need it, and I don't end up stopping by another store on the way home. So, yes, at 50 bucks there is probably a human behavior component, but for smaller changes near equilibrium there are so many things that could be going on. The economics 101 approach to the supply and demand curves would have to take into account the fact that if supply goes down, some stores won't get stocked with enough bacon. And that someone, seeing two packages left, takes only one because someone else might want some. And then there are the people who would take both. And then there are the people who go back and forth between those behaviors depending on the song playing in the store. And then there are the grocery stores that randomly raise the price of bacon because they know people don't check that often and will buy it out of habit. A market is so complicated from a human behavior perspective that it's best to be agnostic.

Oikomenia: Bacon miso soup? Gross.

Informatio: Were you listening to me?

Oikomenia: Not really. You seemed to be on a roll there, though.

Informatio: [Sigh] The thing is that despite all of that complex human behavior, the basic concept of supply and demand ... Adam Smith's invisible hand ... seems to work in the real world as a good first-order approximation. That's pretty amazing to me.

Oikomenia: I agree, it's pretty cool.

Informatio: I don't think you believe it is as cool as I think it is.

Oikomenia: What is this, grade school?

Informatio: No, really. You keep wanting to describe supply and demand with some simplistic model of human behavior that's all about utility maximization. It's like using a tic-tac-toe playing computer program instead of HAL 9000 or Lt. Cmdr. Data. And you're saying it's pretty cool that supply and demand works in a market made of tic-tac-toe programs. But I'm saying it's pretty cool it works with humans!

Oikomenia: Yes! Incentives matter, and humans do work like utility maximizers a lot of the time. The theory works well for certain problems and you want to throw it away.

Informatio: You're right, humans as utility maximizers does work in many cases. And maybe utility maximization is a good theory to use in particular markets or situations. But its assumptions are wrong in general. And the fact that supply and demand works even with complicated humans makes me think that there is a much more general principle at work than utility maximization ... something like information equilibrium.

Oikomenia: I'm not sure I follow you. Both utility maximization and information equilibrium lead to supply and demand -- how can information equilibrium be "better"? As you said earlier, the mechanics of the supply and demand diagram are the same.

Informatio: It's better in that it doesn't make any assumptions about human behavior. No rational expectations, no utility maximization. Instead of taking utility maximization as your fundamental principle, looking for violations, and then adding new behavioral theories for those violations, you can start with information equilibrium as your fundamental principle. The particular kind of violation is specified by the model -- non-ideal information transfer results in a price that is lower than the ideal price. That's when human behavior matters. Maybe figuring out why the price is lower than the ideal involves utility maximization over multiple markets. Maybe it is some behavioral theory such as prospect theory.
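In symbols (again my notation, following the sketch above): ideal information transfer saturates the equilibrium condition, while non-ideal transfer, where the information received by the supply falls short of the information sent by the demand, bounds the observed price from above:

$$p^{*} \;\leq\; k\,\frac{D}{S} = p_{\text{ideal}}$$

so a measured price below the ideal one is the model's signature that something, possibly human behavior, is getting in the way.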

Oikomenia: That just seems like a methodological change. There isn't a change in the content of the theory.

Informatio: But there are some changes. In the information transfer model, expectations set by a central bank can't hold onto an inflation target forever. If you start from some rational expectations theory, then undershooting inflation is due to a lack of credibility of your central bank, or the undershoot is the real target of a credible central bank, or some other kind of de-anchoring. Starting from a perspective of human behavior makes you not only see a puzzle but assume its conclusion ... and neither the puzzle nor its solution is true.

Oikomenia: I don't agree with your example ... look at Canada ...

Informatio: You said before that there are experiments and empirical research where human behavior appears to affect the outcome?

Oikomenia: Feels like hours ago.

Informatio: Well, maybe your assumptions about the baseline human behavior were wrong? Then your assumptions might lead you to see a human behavioral effect where there isn't one!

Oikomenia: I think I see what you are saying. If you start with particular human behavior assumptions, then you have to see the violation as a puzzle and come up with a solution that references your original assumptions. That is a good point. So does that mean you see utility maximization as a particular expansion around information equilibrium, rather than around, say, an Arrow-Debreu equilibrium?

Informatio: That's kind of it. It's more that information equilibrium is a more general kind of equilibrium, and an Arrow-Debreu equilibrium is a particular case where utility maximization is a good theory to use to look at the fluctuations. An economic equilibrium might be both an information equilibrium and an Arrow-Debreu equilibrium. It might be just an information equilibrium. I don't think it is possible for an economic equilibrium to be an Arrow-Debreu equilibrium and not an information equilibrium, but I haven't proven that yet. That's why I don't think you need human behavior to come up with the basics of economics like supply and demand, or the quantity theory of money.

Oikomenia: The quantity theory? Not this again ...

Informatio: It's not really the quantity theory; I just say that as shorthand so economists like you get a basic picture in your head. In most cases, printing a lot of money should lead to inflation, right?

Oikomenia: Not in a liquidity trap.

Informatio: I'm thinking more like Zimbabwe. There's a liquidity trap in the information transfer model, too.

Oikomenia: A liquidity trap without human behavior!!?

Informatio: [Sigh] Yes, it's an emergent property ... it's an entropic force holding prices down.

Oikomenia: I just got a text message! Oh, dear. I have to go ... I'll talk to you later!

Informatio: [Sigh]