Friday, January 30, 2015

Is the demand curve shaped by human behavior? How can we tell?

I'm in the process of writing (yet another) simple introduction to the information equilibrium view of supply and demand, but stumbled onto an issue that -- while I may be wrong about it -- really seems to fly in the face of basic economics. I asked myself the question: how would we go about determining the shape of the demand curve -- especially in a way that lets us see human behavior at work?

You might think experiments would help here. However e.g. Vernon Smith's approach assumes utility. If you give your agents utility functions that obey the conditions of the Arrow-Debreu theorem, then an equilibrium must result from just the pure mathematics of it, along with the mechanics of supply and demand -- regardless of human behavior. This is basically just a restatement of the idea that assuming homo economicus (by giving people well-defined utility functions) effectively implies ideal markets.

I started to look at some classroom experiments ... and saw that they don't actually demonstrate what they set out to demonstrate.

Take this experiment [1] for example (it is not unusual). The idea is that students write down a reservation price and the instructor collects the cards and tallies up the number that would buy at a given price (or they all stand up and sit down as the price called out gets too high in this version [2]). As the price goes up, the number of students willing to pay goes down. Makes sense.

But is this measuring a demand curve (i.e. things like diminishing marginal utility)? No. And it is especially clear in [1] if you look at their graph. It's not a demand curve, it's an inverse survival curve for a normal distribution:

That is to say it's a cumulative distribution function of a normal distribution turned on its side (see the second graph here). What this is measuring is the (normal) distribution of price guesses from the students:
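This reading is easy to check numerically. Here is a minimal sketch (the $1.00 mean guess and $0.25 spread are made-up parameters): tally normally distributed reservation prices at a range of posted prices, and the survival function of the normal distribution falls out.

```python
import random

random.seed(0)

# Hypothetical parameters: students guess a mean price of $1.00 with a $0.25 spread
reservation_prices = [random.gauss(1.00, 0.25) for _ in range(500)]

# "Demand" at each posted price = number of students whose guess is at least that price
for price in [0.25, 0.50, 0.75, 1.00, 1.25, 1.50, 1.75]:
    buyers = sum(p >= price for p in reservation_prices)
    print(f"price ${price:.2f}: {buyers} buyers")
```

The counts trace out the survival function of the normal distribution of guesses, with no marginal utility anywhere in sight.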

It is especially telling that in experiment [2] above, they leave off the last few students -- i.e. the last piece of the CDF where it stops being linear.

This doesn't have anything to do with human behavior. Why is that? Because I can get the exact same "demand curve" using brainless atoms. The contribution to the pressure from one atom is based on the force it exerts against the container -- the change in momentum as it reflects off the wall. That change in momentum is proportional to its velocity, and in an ideal gas, the atoms follow a Maxwell distribution:

If we asked atoms to sit down as different velocities were called out if the velocity was higher than theirs, we'd get the following "demand curve":
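The atomic version takes only a few more lines: sample speeds from a Maxwell distribution (the magnitude of a 3D Gaussian velocity vector, with a unit thermal speed assumed for simplicity) and ask the 'atoms' to sit down, and the same falling curve appears.

```python
import random

random.seed(0)

def maxwell_speed():
    # A Maxwell speed is the magnitude of a 3D Gaussian velocity vector
    # (unit thermal speed assumed for simplicity)
    vx, vy, vz = (random.gauss(0, 1) for _ in range(3))
    return (vx**2 + vy**2 + vz**2) ** 0.5

speeds = [maxwell_speed() for _ in range(500)]

# Call out speeds; atoms "sit down" when the called-out speed exceeds theirs
for v in [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]:
    standing = sum(s >= v for s in speeds)
    print(f"speed {v:.1f}: {standing} atoms still standing")
```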

I used this example since both price vs demand and pressure (velocity) vs volume come from the same derivation in the information transfer model and neither require human behavior to explain.

Now you might say that students' knowledge of the average price of M&M's (in the example) shows how human behavior enters the equation; they see the value of other goods and make utility judgments. But! Atoms also seem to 'know' the average velocity of the ideal gas in the analogous experiment -- set by the thermodynamic temperature. The students know the value of a packet of M&M's because it is set by the value of money (perhaps set by an economic temperature) -- something controlled by the central bank in most economic models.

So how do we see a demand curve in a way that incorporates human behavior?

In the MR University video, after using 'Black Friday' as an example (which is actually the experiment discussed above), they move on to describing it in terms of substitution. That is definitely human behavior, right? We decide to buy other things with our money!

Well, actually ... how is that different from the experiment above? When you have an estimate of the price of M&M's and the price goes above that reservation price you are effectively making the statement "I'd rather spend my money on something else" (or "I don't have that much money" in some cases). Something being "too expensive" and opting to save the money for something else are logically equivalent statements.

Now you might say that in the case of atoms when things get "too expensive" (too high a velocity) it's because they "can't afford it" (their velocity is all they have), not because they've decided to keep their "money" (velocity) for something else.

And that would be true ... for a single container with an ideal gas. But multiple markets is like multiple containers (with the same number of atoms**, i.e. students) at different temperatures (i.e. prices). So while 20% of atoms have a velocity of at least 1 in one container, that same fraction will clear a higher or lower velocity threshold in another, corresponding to 'money' they'd 'spend' on something else.

So, again, how do we see a demand curve in a way that incorporates human behavior?

I'm probably missing something. There could be other experiments*** that show human behavior shining through. Just because I don't know what they are doesn't mean they don't exist.

There is an ulterior motive here, and it's not just that I think starting with humans as optimizing agents is likely not only intractable, but unnecessary. It's that in writing that simple introduction I mentioned at the top of this post I realized that the information transfer model, in an ideal market, has literally nothing to do with the behavior of the agents. Supply and demand are a property of two quantities that are in information equilibrium ... and the mechanics follow**** from D = κ P S. Hold D constant and as S goes up, P must fall (a demand curve). Hold S constant and as D goes up, P must go up (supply curve).
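These mechanics can be verified with toy numbers (all values here are made up, and κ is set to 1):

```python
# All numbers are made up; kappa is set to 1 as in the text
kappa = 1.0

# Demand curve: hold D constant; as S goes up, P = D / (kappa * S) falls
D = 100.0
for S in [10.0, 20.0, 40.0]:
    print(f"S = {S:4.0f}: P = {D / (kappa * S):5.2f}")

# Supply curve: hold S constant; as D goes up, P goes up
S = 20.0
for D in [50.0, 100.0, 200.0]:
    print(f"D = {D:4.0f}: P = {D / (kappa * S):5.2f}")
```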

That's all there is ... and if that's all there is ...


** We are glossing over the fact that we have the capability to distinguish different people and e.g. assign a particular price estimate in each market to a particular person -- something we can't really do for identical atoms. However, there is the question of whether the market can see people as distinguishable ... using money makes transactions anonymous.

*** You might think of an experiment where you reduce the supply and watch how the price goes up and take a survey and ask why people decided not to buy something. However 1) the mechanism you are using assumes supply and demand, and 2) since you are reducing the supply, some people will have to buy less regardless of how they feel about it. (Humans are subject to post-hoc rationalizations, so the survey would be suspect, anyway.)

**** You can get different shaped demand curves from this equation -- it's actually just an instantaneous equation and P is a derivative (dD/dS).

Tuesday, January 27, 2015

This is why sociologists think economists are arrogant

Tyler Cowen cites a study that makes claims about inflation and tolerance of the LGBT community
On the other hand, the data shows that when a society has impressive scores on property rights security and low inflation — two other components of economic freedom indexes — these characteristics are strongly and positively correlated with tolerance of gays. It’s possible that low inflation, and the behavior of a central bank, are stand-ins for the general trustworthiness of a nation’s government and broader institutions, and such trustworthiness helps foster tolerance.
That conclusion kinda depends on low inflation being the result of the actions of institutions, doesn't it? That implies a monetarist view, but even if you allow the word 'institutions' to be more inclusive of e.g. a 'fiscal theory of the price level', that still implies some sort of control over inflation by national elites.

Countries mired in liquidity traps are on average more tolerant? That makes the question seem even weirder. And for market monetarists, a liquidity trap is a sign of incompetent institutions (namely the central bank).

In the information transfer model, low inflation is generally a result of a monetary policy regime being around for a long time. All economies tend toward lower inflation in the long run (see the graph at the top of this post).

That leads us to the conclusion that, basically, long periods of economic stability lead to tolerance.

Which is something well known in sociology.

Simon Wren-Lewis has a good piece on economists and sociologists. There was also this from Stephen Williamson. But it's the armchair sociology that derives from economic analyses that is problematic.

Lee Smolin's take on Arrow-Debreu

I've been reading Lee Smolin's (of loop quantum gravity fame) take on Arrow-Debreu equilibrium (mentioned by Tyler Cowen a while ago), and I was struck by how much we are looking at the economic problem the same way. It's probably just our shared backgrounds in particle physics. His take is a lot more sympathetic to the idea that gauge symmetries and other mathematics may be of use, something I would probably share if the information transfer model didn't seem to point to non-ideal information transfer or spontaneous drops in entropy from time to time [1]. Anyway, I've put together (mostly just for my own notes) a collection of quotes from Smolin's paper with something similar I've said on this blog. I follow the quotes with square brackets that get at the differences between what we are saying.

Let's go ...

Smolin: We are then interested in the simplifications that may happen in limits of large numbers, such as a large number of agents, or of goods. In these limits there may be universality classes that only depend on a few parameters that characterize the behaviors of the human actors that comprise the economy

Me: In reality, there may be a detailed balance that keeps the equilibria in an equivalence class described by e.g. a given NGDP growth rate. But that's the rub! Macroeconomics is the study of the behavior of those equivalence classes, not the instances of them!
[This is basically the same thing -- universality class is a particular kind of equivalence class.]
Smolin: The observables of economics are accounting and other records. One should then try to construct a theory of economics that involves only observables. The importance of this kind of operational principle in physics and other sciences have been paramount. Let’s see what it can do for economics. This restricts attention to what is actually measured by companies, individuals and governments. And it removes from consideration fictional elements that have nothing to do with how real economies work such as fixed spaces of products, fixed production plans, utility functions etc

Me: Game theory is a representation of economic microfoundations; the information transfer framework makes as few assumptions about microfoundations as possible. We are assuming we don't know any game theory. ... The following is a rough sketch, but one way to think about the information transfer model is as an ensemble of games with random payoff matrices (zero-sum and not) with players repeating games, switching between games and possessing some correct and/or potentially incorrect information about the payoff matrix (or its probability distribution). The only "constraint" is that all of the realized payoffs sum up to a macroeconomic observable like NGDP.
[Where Smolin says leave out unobservable things like utility, I say assume you know nothing about them -- in this case a payoff matrix in game theory is essentially a set of contingent utility functions. Expectations also seem unobservable in this sense.]
Smolin: An analogy to physics might be helpful here. Just like there is micro and macro economics, there is micro and macro physics. The former is atomic physics, the latter includes thermodynamics and the description of bulk matter in different phases. Macrophysics mostly deals with matter in equilibrium. The bridge between them is a subject called statistical mechanics, which is a general study of the behavior of large numbers of atoms, both in and out of equilibrium. Indeed, even though there is not a close analogy between the notions of equilibrium in economics and physics, there is clearly a need for a subject that might be called statistical economics. It would be based on a microscopic model of the basic agents and operations or processes that make up an economy and study the behavior of large numbers of them in interaction.

Me: ... assume the principle of indifference: given the macrostate information you know (NGDP, price level, MB, unemployment, etc), assume the system could be in any microstate consistent with that information with equal probability [2]. In Bayesian language, this is the simplest non-informative prior. This way lies statistical mechanics, thermodynamics and information theory.
[This is the same idea.]
Smolin: Furthermore, since equilibria are in general non-unique, there is no mechanism in the theory to explain why one rather than another of these equilibria could be chosen by the market mechanism. All we know about the market mechanism in the theory is that it looks for Pareto efficient states, but if there are many the market mechanism cannot choose among them.

Me: ... imagine a world where fuel was slightly more expensive and cars were slightly less expensive. Depending on the relative price this could still clear the market with the same amount of money being spent in aggregate on cars and fuel. ... However! If there is less fuel and more cars, then there might be fewer ways to associate gallons of fuel with cars (lower entropy since the fuel is fungible) and you could select the actual equilibrium based on maximum entropy production.
[Maximum entropy selects which Arrow-Debreu equilibrium obtains; this quote represents a possible solution to the problem in the quote from Smolin.]
Smolin: Time must be incorporated in a way that recognizes the irreversibility of most actions taken, as well as the asymmetry of the present, past and future.

Me: Basically, because the market is made of people, we can violate the second law of thermodynamics (ΔS > 0) by coordinating ourselves in a way that atoms or particles can't. There is no second law of econo-dynamics because of human behavior -- which is unfortunate because otherwise (if human nature didn't matter) ΔS > 0 would imply ΔNGDP > 0 -- the economy would always grow (absent real shocks like natural disasters or resources running out).
[The second law holds most of the time as there is usually economic growth; a recession represents moving backwards. My arrow of time in economics is essentially the thermodynamic arrow of time, with some exceptions during recessions. Entropy producing processes are irreversible processes. This is another case where what I am saying is a solution to a problem stated by Smolin.]
Smolin: Markets with large numbers of agents have very large approximate symmetries, expressing the fact that there are many individuals with similar educations, interests or aspirations and many firms competing to offer similar products and services. In the steady states reached by real economies these symmetries are usually broken. This leads to multiple equilibria or steady states. The representation theory of the broken symmetries is then relevant to the distribution of equilibria.

Me: If your macro system appears to be described by n << N degrees of freedom, then it seems highly likely that among the total number of microstates, large subsets of the microstates are going to be described by a given macro state -- i.e. the equilibrium (the microstate satisfying macro constraints) is not going to be unique. For example, in an ideal gas, you can reverse the direction of the particle velocities and obtain another equilibrium (actually, all spatial, rotational and time-reversal symmetries lead you to other equilibria).
[There is a slight difference here in that Smolin is being much more accurate from the physics perspective. Equilibria related to each other by a symmetry transformation are not actually distinct (e.g. reversing the particle velocities) in physics. The sense here is that you could exchange iPads for Nexus tablets in theory, but the actual dominance of iPads breaks the symmetry leading to two equilibria: one where the Nexus dominates and one where the iPad dominates.]

[1] Smolin writes: ... we want to know how far from ideal a real economy may be, and still count as evidence for the theory. For example, let's take the prediction that all markets clear in equilibrium. There are clearly lots of markets in the real world that do not perfectly clear. In the information transfer model, we have a theory that tells us the 'neoclassical' view holds if the system is in information equilibrium (we don't have episodes of non-ideal information transfer). Essentially, if the information transfer model holds assuming ideal information transfer, we have evidence for the ideal neoclassical theory -- except for the emergent aspects of macro like liquidity traps and nominal rigidity.

Monday, January 26, 2015

How do you measure the price level?

econ: PCE or CPI? Core or headline?
info: [Sigh] ...
econ: What?
info: They're barely different from each other!
econ: Isn't there a joke about a physicist and a spherical chicken?
That's from me a few days ago. I've been on an anti-utility kick lately and in the process, I've been looking at a bunch of stuff on preferences, total orderings and ... differential geometry. Smolin's paper mentioned Malaney and Weinstein's gauge theory approach to price indices. I plan on doing a bit more thinking about what Smolin says -- he thinks a non-equilibrium statistical mechanics approach to economics may be useful (like the one I am advocating on this blog), saying: Nonetheless, once one gets past this confusion [about thermodynamic equilibrium], there is still a cogent claim that non-equilibrium statistical mechanics may be a basis for a model of an economy. But for now, I'm going to concentrate on the gauge theory approach to the so-called index number problem.

I seriously dislike the name, because it makes me think of some kind of deep problem in number theory or the Atiyah-Singer index theorem. Oh, well. I don't have much of a choice there. Let's start with Wikipedia:
The "index number problem" refers to the difficulty of constructing a valid index when both price and quantity change over time. For instance, in the construction of price indices for inflation, the nature of goods in the economy changes over time as well as their prices.
Wikipedia continues: "There is no theoretically ideal solution to this problem."

In fact there may be, based on Malaney and Weinstein's differential geometry approach. In her thesis, she motivates an 'economic derivative' that is a covariant derivative. You may remember that this gauge theory approach was part of Chris House's rant about physicists in economics. I opined on this blog, and asked the question that should be asked of any mathematical approach to a problem: what is it good for?

Well, Malaney's approach gives you a way to solve the index number problem ... assuming you buy into the other assumptions. The idea is that a salary $S(t)$ that is always proportional to the 'correct' price index $P(t)$ (with constant of proportionality $\alpha$) should have the same purchasing power, which allows the construction of a covariant derivative

\frac{D}{Dt} = \frac{d}{dt} - \frac{d}{dt} \log P 

So that

\frac{D}{Dt}S = \frac{D}{Dt} \alpha P = 0

This choice of what you mean by 'constant' (i.e. derivative equal to zero) solves the index number problem, because it essentially makes all of the price indices equivalent to a so-called Divisia index. Don't let the Divisia terminology worry you. Essentially, various chained indices or indices with changing baskets of goods are approximations to a Divisia price index.
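To see the chaining concretely, here is a sketch with two hypothetical goods (the price and quantity paths are invented): chain-linking the increments $d \log P = (q \cdot dp)/(q \cdot p)$ converges to the continuous Divisia limit as the chaining gets finer.

```python
import math

# Invented price and quantity paths for two goods over t in [0, 1]
def p(t):
    return [1.0 + 0.5 * t, 2.0 - 0.3 * t]

def q(t):
    return [10.0 + 2.0 * t, 5.0 + 1.0 * t]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def chained_index(steps):
    """Chain-link the price index: d log P = (q . dp) / (q . p)."""
    log_P, dt = 0.0, 1.0 / steps
    for i in range(steps):
        t = i * dt
        dp = [b - a for a, b in zip(p(t), p(t + dt))]
        log_P += dot(q(t), dp) / dot(q(t), p(t))
    return math.exp(log_P)

# Finer chaining converges to the continuous (Divisia) limit
for steps in (1, 10, 1000):
    print(steps, chained_index(steps))
```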

The problem economic theorists would have with this is that a Divisia price index is path dependent, with the path dependence brought about by changing preferences. Basically, what makes the Divisia price index problematic is the issue of mapping it to utility in the sense that John Kay discusses here.

Would you rather be the richest person in history or live in a time of antibiotics? Divisia answers with the former if you equate the 'inflation adjusted' value of goods with utility as defined in ethical philosophy. The path dependence is obvious here -- because Rothschild would likely value antibiotics today. Malaney argues that this path dependence may be a desirable property in her thesis, but overall, the question of how money equates to utility will probably always be a philosophical one.

As an aside, Noah Smith talks about unstable preferences here. However, the mapping of money and price indices to information lacks these philosophical issues. I've talked about this before -- inflation is better understood as an information theoretical construct rather than a utilitarian philosophical one.

The new bit here is that I want to show that the Divisia price index is the proper measure in the information transfer model, just like in the differential geometry approach of Malaney and Weinstein. The underlying assumption is different, though. In our case, a price index isn't measuring 'constancy' (in Malaney's thesis), but rather 'meaningful aggregation'.

The question we ask is whether there exists a consistent price level $P$ that allows us to treat the economy as aggregate supply $Q$ and aggregate demand $N$ when $P$ is made up of individual prices $p_{i}$ (and likewise $q_{i}$ and $n_{i}$).

If we start with (taking $\kappa = 1$ WLOG)

N = Q P

Then taking the logarithmic time derivative (this gives a percentage rate when multiplied by 100), we get

\frac{d}{dt} \log QP = \frac{d}{dt} \log Q + \frac{d}{dt} \log P

= \frac{1}{Q}\frac{dQ}{dt} + \frac{1}{P}\frac{dP}{dt} \;\;\;\text{ (1)}

However, if we take the individual markets (I'll use vector notation to simplify the equation a bit)

q \cdot p \equiv \sum_{i} q_{i} p_{i}

\frac{d}{dt} \log q \cdot p = \frac{1}{q \cdot p} \frac{d}{dt} q \cdot p

\frac{d}{dt} \log q \cdot p = \frac{1}{q \cdot p}  \left( \frac{dq}{dt} \cdot p + q \cdot \frac{dp}{dt}\right)

\frac{d}{dt} \log q \cdot p = \frac{1}{q \cdot p} \sum_{i}  \left( \frac{dq_{i}}{dt} p_{i} + q_{i} \frac{dp_{i}}{dt} \right) \;\;\;\text{ (2)}

The Divisia price index $P$ is defined by separately equating the two terms of (1) and (2) and solving the differential equations:

\frac{1}{Q}\frac{dQ}{dt} = \frac{1}{q \cdot p} \sum_{i}  \frac{dq_{i}}{dt} p_{i}

\frac{1}{P}\frac{dP}{dt} = \frac{1}{q \cdot p} \sum_{i}  q_{i} \frac{dp_{i}}{dt}
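As a quick numerical sanity check (the two-good data below are made up), the quantity term and the price term do sum to the total nominal growth rate:

```python
# Made-up instantaneous data for two goods
q  = [10.0, 5.0]     # quantities q_i
p  = [1.0, 2.0]      # prices p_i
dq = [0.2, 0.1]      # dq_i/dt
dp = [0.05, -0.03]   # dp_i/dt

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

nominal = dot(q, p)                                  # q . p
total_growth  = (dot(dq, p) + dot(q, dp)) / nominal  # d/dt log(q . p), eq. (2)
quantity_term = dot(dq, p) / nominal                 # (1/Q) dQ/dt
price_term    = dot(q, dp) / nominal                 # (1/P) dP/dt (Divisia inflation)

# The separate terms recover the total, so the aggregation is consistent
assert abs(total_growth - (quantity_term + price_term)) < 1e-12
print(quantity_term, price_term)
```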

In the information transfer model, that means $P$ allows you to convert between informationally equivalent baskets of goods. That is to say that a basket from 2014 that includes an iPad and one from 1980 that does not can potentially be equivalent in information.

However, an intervening redefinition of money or a monetary regime change will break the direct equivalence between $P$ for different times. That is to say Rothschild's and Kay's utility cannot be compared because they lived in different monetary regimes (a gold standard and fiat currency regime), even if both used 'pounds sterling'.

Interestingly, non-ideal information transfer where $N \geq Q P$ does not affect this derivation -- we are equating the RHS of equations when we equate (1) and (2); both are on the same side of that inequality.

Anyway, more on the Smolin article to come.


I kind of glossed over the role of the changing information transfer (IT) index (κ) in the more accurate model of the price level in terms of the money supply. In this post, we used an aggregate supply (Q) rather than the money supply (M), so I don't think there should be a time changing IT index in that case. In a sense, PQ = N represents both the expenditure method and income method of calculating NGDP which should be equal (and should have κ = 1).

Saturday, January 24, 2015

Keynesian economics in three graphs

In its most general form, Keynesian economics is the idea that government spending can buck the trend of other forms of spending without causing changes in other forms of spending (like crowding out). Essentially, it can be a counter-cyclical force for growth.

This comes to the forefront in the information transfer model. If we represent the economy as made up of many random markets (as I discussed here and originally modeled here), we have a picture like this:

Each square represents an "economic growth state" of a particular firm or market. The distribution is centered around the average growth rate of the macroeconomy. In this picture, the government (G) represents a large one of these squares (about 10-30% of the economy, depending on the country), and most of the time, its growth rate is about the average:

As a single entity (in the simplest model) it isn't subject to the entropic forces maintaining the distribution, and so it can move right toward higher growth by political fiat, pulling the average growth rate up:

The crowding out argument is that the other boxes will move left toward lower growth. However that represents a coordination problem. The remaining smaller boxes can't organize themselves to over-represent the lower growth rates -- that would require the thermodynamic equivalent of a spontaneous decrease in entropy. Now it could trigger a panic that can produce a spontaneous decrease in entropy, but that seems like a strange model (it is similar to the idea of Ricardian equivalence ... but in this model, it is a psychologically irrational force, not a result of rational expectations of future tax increases).

Also, the idea of the Keynesian multiplier is that not only does G go up, the new distribution moves to restore a maximum entropy (equilibrium) state. However, moving G does move the average even if the other markets don't follow it.
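The arithmetic of that last point can be sketched directly (the growth rates and the 20% government share below are made-up numbers): moving the single large box shifts the weighted average even when no other box moves.

```python
import random

random.seed(0)

# 99 small markets (80% of the economy) plus one government box G (20%)
small_growth = [random.gauss(2.0, 1.0) for _ in range(99)]  # percent
small_weight = 0.8 / 99
g_weight = 0.2

def average(g_growth):
    return sum(small_weight * g for g in small_growth) + g_weight * g_growth

before = average(2.0)  # G grows at roughly the average rate
after = average(5.0)   # G moves right by political fiat; no small box moves

print(before, after)
```

The average shifts by exactly G's share of the economy times the size of G's move, with no cooperation from the other boxes required.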

Here's a really great analogy: heating leftovers in the microwave. When your food comes out of the refrigerator, it's about 40 °F and the molecules have an average kinetic energy of (3/2)kT, where T is the temperature (in Kelvin) and k is Boltzmann's constant, but are distributed according to a Maxwell-Boltzmann distribution (the analog of the distributions above). When you microwave food, you excite rotational modes of water molecules -- effectively picking them out of the distribution for additional energy (like raising G). The other molecules eventually come into thermal equilibrium with the new average kinetic energy (3/2)kT' where T' > T.

Ricardian equivalence would be the rather silly argument that the other molecules decide to get colder on seeing the water molecules heat up (out of spite, I guess).

Scott Sumner: Data mangler.

Scott Sumner has apparently chosen to die on the hill of budget sequestration. He claims that the lack of a recession is vindication of market monetarism. His calculation is flawed, as I've mentioned before. And his new defense, same as the old defense, is this:
Economists generally agree that when you have a shock that occurs at the beginning of a calendar year, you should look at growth over the course of the year (say Q4 to Q4), not the average GDP in 2013 compared with the average GDP in 2012.

So Sumner wants to get into it, huh? Well anyone who looks at data generally agrees that choosing only two points (regardless of whether they are Q4-Q4 or Q1-Q1) to measure your effect vastly increases the influence of statistical fluctuations in the data, especially when looking at growth rates. Additionally, Q4 to Q4 measures from Q4 of 2012 to Q4 of 2013, anchoring on a full quarter of data from before the shock supposedly hit (at the beginning of CY 2013). But vague assertions of economists generally agreeing are apparently good enough to outweigh a real econometric argument (the same one made by Mike Konczal, by the way).
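The point about two-point measurements is straightforward to demonstrate with a Monte Carlo sketch (a flat trend with an arbitrary noise level; all parameters invented): the Q4-to-Q4 growth rate comes out roughly twice as noisy as the annual-average growth rate.

```python
import random
import statistics

random.seed(0)

def growth_measures():
    # Eight quarters of GDP on a flat trend plus noise (all units invented)
    gdp = [100.0 + random.gauss(0.0, 1.0) for _ in range(8)]
    year1, year2 = gdp[:4], gdp[4:]
    two_point = year2[-1] / year1[-1] - 1.0                   # Q4-to-Q4 growth
    averaged = (sum(year2) / 4.0) / (sum(year1) / 4.0) - 1.0  # average-to-average
    return two_point, averaged

trials = [growth_measures() for _ in range(10000)]
sd_two_point = statistics.stdev(t[0] for t in trials)
sd_averaged = statistics.stdev(t[1] for t in trials)

print(sd_two_point, sd_averaged)  # the two-point measure is noisier
```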

Now maybe expectations of the sequester allow you to include GDP data from October of 2012 in your 2013 growth rate -- but if you recall, the political process had gone up to these fiscal cliffs before and averted them at the last minute. Only 50% of the sequester could have been already priced in (assuming it had a 50-50 chance of happening; your percentages will vary).

As an aside, if people generally believed in the Keynesian model, wouldn't expectations of a fiscal cliff hurt regardless of whether the "concrete steppes" of actual cuts happened? And the more credible the fiscal authority was in making changes, the smaller the actual changes have to be, right? The market monetarist model depends not only on the model being right, but that the market thinks the model is right. If you rely on expectations without the need for concrete steps, then whatever model the market believes is the actual economic model.

But the biggest problem with Sumner's argument is that the biggest percentage cuts from the sequester came in FY 2014, not FY 2013 [1]:

Now FY 2014 runs from CY 2013 Q4 to CY 2014 Q3, which means that the big negative growth shock we saw in Q1 of CY 2014 could be attributed to the sequester.

The additional kicker is that the cuts were only specified at a top line level. That means that affected government agencies had to figure out how to allocate the cuts (and get approval from their leadership) -- the result was that many of the cuts did not go into effect until the end of 2013 (Q3). Cuts also couldn't be made to contracts for work already delivered, so by the time the agencies figured out how to allocate the cuts, pretty much all of it had to be in the future relative to when the plan was agreed on. That means we'd be looking for a shock somewhere inside the yellow box in this diagram:

A big shock somewhere inside the yellow box. Hmm.

The red line is the smoothed version of the "first order Keynesian model". The original model discussion is here. Sumner's faulty calculation is shown as the orange line. Like I said before, it goes from a negative fluctuation to a positive fluctuation making it maximally misleading.

What is interesting is that this calculation reinforces the information transfer model in several different ways:

  • The shock occurs not when the cuts were announced, but around the time they actually happened ("concrete steppes"). This invalidates an expectations view of NGDP. Under that model, the cuts should have had impact as soon as they were expected.
  • The shock is way too large to be just due to the concrete steps. It probably represents a panic among government contractors and the associated non-ideal information transfer.
  • You could never see this shock without a model that includes the capability to decompose monetary and fiscal shocks. It's hard to see in the data (it's still there) without a model for the path of inflation and NGDP in the absence of shocks.

As a final note, I'm not sure why Sumner is focusing on this particular episode other than ego (Krugman said it was going to be a test of market monetarism). This one data point is among hundreds for different countries. Sumner does approvingly cite a purported "takedown" of Krugman's austerity graph, but that "takedown" assumes the market monetarist model in order to throw out data (basically, all the liquidity trap countries) that make up the bulk of the correlation. Even in the information transfer model, if I throw out most of the liquidity trap countries the correlation between expansionary fiscal policy and increased growth disappears. That's true of Paul Krugman's model as well! If a country's not in a liquidity trap, monetary policy will offset fiscal changes.

It's a pattern of selecting the data to get the result you want. Now that Sumner is on his way to Mercatus, I can only assume it will get worse. We'll start seeing things like this (from my old blog). At least he didn't join Freakonomics.


[1] FY is the US Government fiscal year, which runs from October 1 to September 30; FY 2014 is (the beginning of) CY 2013 Q4 through CY 2014 Q3, where CY is calendar year (January to December). I have to deal with this all the time at work because my company uses calendar years, but most of my work is with government contracts that go by fiscal years.
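For reference, the conversion can be written as a tiny helper (a sketch; the function name is mine):

```python
def fy_quarter_to_cy(fy, fq):
    """Map a US federal fiscal year quarter to (calendar year, calendar quarter).

    The federal fiscal year runs October 1 to September 30, so FY N Q1
    is October-December of calendar year N - 1.
    """
    if not 1 <= fq <= 4:
        raise ValueError("quarter must be 1-4")
    return (fy - 1, 4) if fq == 1 else (fy, fq - 1)

print(fy_quarter_to_cy(2014, 1))  # (2013, 4): FY 2014 starts in October 2013
print(fy_quarter_to_cy(2014, 4))  # (2014, 3): FY 2014 ends in September 2014
```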

Friday, January 23, 2015

I'm not sure economists understand supply and demand

I had put off watching the new Economics 101 videos from "Marginal Revolution University" until I had a sufficient contiguous chunk of time available. My reaction to the first video is pretty much summed up by this blog post titled Why people hate economics, in one lesson. And I have some philosophical objections to starting with the axiom that incentives matter -- at least as a useful methodology for producing tractable models.

The first few videos after the introduction look at supply and demand curves and then at equilibrium. Now I am completely behind the idea of supply and demand curves and the equilibrium price. I actually derived them from information theory here. But the logic behind the supply and demand curves MR University shows -- to be honest it's not really any different from how they were taught in my high school economics class, or how they are explained on Wikipedia -- is way too dependent on the specific relationships of a particular good (they use oil). More oil is produced at a higher price because it becomes profitable for companies to find and extract harder-to-reach oil. But this isn't true for software (additional units have almost zero marginal cost, so the price falls as upfront development costs are spread over more units). And the fact that the demand curve slopes down seems to be more a result of inequality for some goods (more people would buy iPads if they could afford them) than of marginal utility. And in real life, people get upset at Uber or 'price gougers' who raise their prices when supply is scarce. This makes the mechanism seem brutal.

Yes, yes: barriers to entry, patents and upfront capital costs. However, my point isn't that the supply and demand curves are wrong because specific examples are wrong or seem heartless. As I said in the previous paragraph, I believe in supply and demand. I have an issue because the way MR University teaches it assumes the marginal utility interpretation of supply and demand -- that "incentives matter", and specifically pecuniary incentives. More oil is produced because it is profitable. I buy more of something because it is cheaper.

This becomes more apparent in the equilibrium price video (linked above) where Alex Tabarrok tells us about an experiment by Vernon Smith and concludes that the supply and demand model works. However, the Smith experiment assumes the utility version of the supply and demand model! He hands out pieces of paper with different utilities (measured in money [1]) and different marginal unit costs of production, forcing the structure of supply and demand curves given in the models. It is a bit like saying all two-player games result in ties, then designing tic tac toe and saying it proves your assertion.
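To see how much structure the induced values impose, here is a sketch with made-up card values: once buyer redemption values and seller unit costs are handed out, the competitive equilibrium quantity and price are pinned down by the cards alone, before any trading behavior enters.

```python
# Hypothetical induced values: buyers' redemption values and sellers' unit
# costs, as handed out on cards in a Smith-style classroom experiment.
buyer_values = sorted([10, 9, 8, 7, 6, 5, 4, 3], reverse=True)
seller_costs = sorted([2, 3, 4, 5, 6, 7, 8, 9])

# The competitive equilibrium follows mechanically: units trade as long as
# the next buyer's value is at least the next seller's cost.
q = 0
while q < len(buyer_values) and buyer_values[q] >= seller_costs[q]:
    q += 1
p_low, p_high = seller_costs[q - 1], buyer_values[q - 1]
print(q, (p_low, p_high))  # equilibrium quantity and price range
```

With these cards the crossing gives 5 units at a price of 6 -- a result of the assigned numbers, not of any observed behavior.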

It is not clear that the result of the Smith experiment is anything other than an asynchronous poll [2] of the prices on the students' cards -- and that has nothing to do with supply and demand [4].
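The "poll" point can be made concrete: if reservation-price guesses are drawn from a normal distribution (parameters invented here), tallying who would buy at each posted price traces out the survival function of that distribution -- which looks like a downward-sloping demand curve without any marginal utility in sight.

```python
import numpy as np

# 30 students write down reservation prices drawn from a normal
# distribution (mean and spread invented for illustration).
rng = np.random.default_rng(42)
guesses = rng.normal(loc=5.0, scale=1.5, size=30)

# Tally how many would buy at each posted price: this is one minus the
# empirical CDF -- a survival curve -- not a marginal-utility demand curve.
prices = np.linspace(1, 9, 9)
buyers = [int((guesses >= p).sum()) for p in prices]
print(list(zip(prices, buyers)))  # counts fall as the price rises
```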

Now I have my own interpretation here (succinctly: supply and demand curves are models of the entropic forces that enforce information equilibrium), but you don't have to take my word for it. The information aggregation interpretation of markets [2] is a less restricted approach, though I have my disagreements with it. I'm sure there are other ways to teach this. A market is an easy-to-use algorithm for attempting to solve a linear programming problem, for example. Or just posit the supply and demand curves themselves as axioms -- why do they even need a specific interpretation? That will help when you have different slopes for the supply and demand curves later in the class [3].


[1] In this setup, you assume the social welfare function of the market -- wealthy people are similar to utility monsters: it is better for you to give up your stuff for them to enjoy because they enjoy it much more than you do.

[2] Markets are really strange as polls or information aggregation devices. It essentially says that if you are good at predicting things in one domain (say, business), you should be rewarded with money and hence a greater ability to try and predict things in unrelated domains (say, politics). If you make a lot of money in sporting goods, you have been granted the means to try your hand at farming or producing records. I don't really take being good at one thing as evidence you are good at something else.

[3] In the information transfer model, you do have the freedom to make different sign choices; it basically says that a negative change in a quantity is keeping the two quantities (supply and demand) in information equilibrium instead of a positive change. You could look at this as measuring the price of land in terms of the quantity of land left. The ordinary supply and demand model doesn't really allow you the freedom to look at the problem in this alternate way.

[4] It is nagging at me why Tabarrok doesn't get this. It's not like he doesn't have a PhD in economics or anything. Am I missing something? Or is it just motivated reasoning (from me or Tabarrok)? Tabarrok makes a point of saying that Smith thought it wouldn't work. I have no idea why Smith would think it wouldn't work and that's not just the curse of knowledge. Even if the prices announced in the auction were random, the result would be the same equilibrium price. The only two kinds of equilibria I can think of are ones that on average have the equilibrium price given by the cards and the ones where the market doesn't clear (not all of the cards are traded) that should generally have an observed price below the equilibrium price (according to the information transfer model).

Thursday, January 22, 2015

A Socratic dialog on information equilibrium in economics

info: Hi econ! I'm working on a new economic theory -- can I get some input?

econ: Why are physicists always drawn to economics? You probably just want to show off how good you are at math again. I'll bite. What's your theory, info?

info: Actually, the math is really simple. Simpler than a DSGE model, anyway. It's based on information theory, and ...

econ: Information theory? Economists already use information in their theories. There's a whole field of information economics ... it's even got its own Wikipedia page.

info: That's not exactly the same thing. The content of a piece of information matters in information economics, right?

econ: I don't follow. How does a piece of information differ from its content? Information is its content.

info: There is a difference between saying you have a 10 GB flash drive and saying you have a 10 GB image file of a kitten, right? The first is only a quantity of information and the second is a specific instance of that quantity of information.

econ: I see. But I still don't see how that makes your information theory so different from information in ... oh, let's say game theory.

info: That's a good place to see the difference! In a game with perfect information, like chess, the locations of the pieces matter ...

econ: Of course the locations matter. It wouldn't be chess if they didn't.

info: Well, in the information theory I'm talking about, they don't. Actually, it's more like they might matter, but we don't really care. Anyway ... the information in the chess board can be encoded in a number less than 13^64 (six possible pieces of each color, plus no piece, in 64 squares) that would take about 237 bits of storage. That even includes impossible positions, such as a rook on every square.
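The 237-bit figure is just $64 \log_2 13$, which is easy to verify:

```python
import math

# 13 states per square (6 piece types x 2 colors + empty) on 64 squares,
# including impossible positions like a rook on every square.
bits = 64 * math.log2(13)
print(round(bits, 1))  # about 236.8 bits
```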

econ: How could the impossible positions even matter?

info: That's a good question, but the only reason we know that some positions are impossible is that we know the microfoundations of chess -- the rules of the game. We don't really know the microfoundations of macroeconomics, yet.

econ: So you're not making any assumptions about the microfoundations of chess, you're just looking at it as 237 bits going back and forth between players?

info: Exactly! That's information equilibrium. The information received is the information sent. That's a great analogy ... I'll have to remember that one.

econ: You're welcome.

info: There's also something detecting those 237 bits going back and forth ...

econ: The win-loss ratio?

info: You're good! It would be the win-loss ratio of one of the players, though. We'd probably want to take a logarithm, too.

econ: Naturally.

info: If two players are basically equal, or basically making random moves, you'd expect a win-loss ratio of about 50%. If it changes, one player is probably better than the other.

econ: But what does the 237 bits have to do with this?

info: Well, we'll need to consider whether the chess board gets bigger or smaller to represent economic growth. And if there is a shade obstructing half the view of the board, only about 118 bits are getting through. You'd expect a good player's win-loss ratio to fall under those conditions.

econ: What is that supposed to be, a recession?

info: I am not completely sure at this point. I'm still working on the theory.

econ: I'm just looking for the connection to economics.

info: Ok, ok. You can make the analogy of one player winning more than usual as a positive shift in the demand curve, raising the win-loss ratio. That's the price. It could also be a negative shift in the supply curve.

econ: We economists usually say the curves shift right or left, or up or down ... if we even use supply and demand curves at all outside of undergraduate economics 101.

info: So you'd probably not be impressed that information equilibrium leads to supply and demand curves?

econ: Not really.

info: Or the IS-LM model?

econ: Your ad hoc theory lets you come up with ad hoc models? Pass.

info: I've made some predictions of the future path of inflation.

econ: Economics isn't really about predictions. We're more like doctors than The Weather Channel. When something goes wrong with the economy, we tell you how to fix it.

info: Well, the theory allows you see the different effects of fiscal and monetary policy ...

econ: Well that's interesting. And relevant! Let's hear it.

info: Remember the 237 bits in the chess game? Well let's say NGDP is an encoding of a given economic scenario -- a chess board position --- with dollars being something like bits.

econ: Ok.

info: Now let's posit that NGDP is in information equilibrium with another number encoding the same economic scenario. Let's use the money supply.

econ: Which money supply?

info: It doesn't really matter right now.

econ: I assure you, it does matter.

info: We'll figure that out empirically later. Just hear me out for now.

econ: All right.

info: And the price that is detecting the information moving around, keeping both numbers in equilibrium like the win-loss ratio, is the price level.

econ: PCE or CPI? Core or headline?

info: [Sigh] ...

econ: What?

info: They're barely different from each other!

econ: Isn't there a joke about a physicist and a spherical chicken?

info: Anyway, when you put these pieces together you ...

econ: ... get the quantity theory of money. Yes, yes. But that's like a hundred years old and has pretty much been discredited by empirical evidence ... unless you add inflation expectations terms, I guess.

info: But it's not exactly the quantity theory of money. That's just the high inflation limit. As the economy grows, inflation tends to fall and that eventually leads to something that looks like a liquidity trap.

econ: Why?

info: The reason seems to be that as an economy grows, a typical dollar is more likely to be used in a transaction in a low-growth market.

econ: But why is that?

info: Of all the possible economies you can have with a given NGDP, most have a bunch of low growth markets and a few high growth markets ... and that ratio gets bigger as the economy gets bigger.

econ: You still haven't explained why.

info: If you don't make any assumptions about how the economy works, that is just the most likely configuration. There are more ways an economy with a given NGDP can have a bunch of low growth markets than a bunch of high growth markets. It's like there are more ways you can send out a given amount of energy with a bunch of low energy photons than with a few high energy photons. It's a maximum entropy argument. Information entropy. There is an entropic force preventing high inflation that gets stronger with the size of the economy.
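info's counting claim can be illustrated with a toy enumeration: among all ways to split a fixed integer total into "markets" (the total is a stand-in for NGDP; nothing here is calibrated to a real economy), small parts overwhelmingly dominate.

```python
# Count the ways a fixed total can be split into 'markets' of integer size.
# The total (20) is a toy stand-in for NGDP, chosen small enough to enumerate.
def partitions(n, max_part=None):
    """Yield all partitions of n as non-increasing lists of parts."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield [k] + rest

total = 20
parts = list(partitions(total))
sizes = [s for p in parts for s in p]
small_share = sum(1 for s in sizes if s <= 3) / len(sizes)
print(len(parts), round(small_share, 2))  # most parts are small
```

Even at this tiny scale, roughly three quarters of all parts across the configurations have size 3 or less, and the dominance of small parts grows with the total.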

econ: That doesn't explain anything. You need to tell a story ... how do the incentives change for firms to raise their prices less when the economy is bigger?

info: You want me to tell you the Calvo fairy gets tired over time? Or can't get around to all of the firms as fast as it used to when the economy was smaller?

econ: At least that's a start.

info: That would be like adding a Goldilocks force to an atomic model that makes the atom move to where the density is not too high or not too low in order to explain diffusion. It's not only unnecessary ... it's actually wrong. Like Calvo pricing. Firms change their prices all the time. Menu costs are trivial.

econ: The Calvo fairy is just an example of a microeconomic model assumption used to get effects that are observed in empirical macro data into the model.

info: Then why don't you put your models up against empirical data?

econ: The data rejects too many good models!

info: What?!

econ: That's a bit of a joke. But there are lots of successful real-world tests of economic models. Auctions use economic theories to produce better outcomes. Did you hear about that prediction of how many people would ride a new BART line in San Francisco?

info: I have. In fact, I believe it used a random utility model that says the utility someone derives from making a choice has a deterministic component plus a random component.

econ: That's the one.

info: How is the random component different from what I am talking about? In a random utility model, people don't always make the best chess move they can think of (the part we can sometimes predict); they also make random chess moves that we can't predict.

econ: It's still based on individuals making choices -- we just allow that there is a component we can't observe.

info: In that case the number of choices and the distribution the unobserved components are drawn from -- the information theory -- become your only macroscopic constraints. Your deterministic piece is like the cell phone metadata and the random piece is like the content of a text message. That is exactly what information theory was designed for!

econ: Macroscopic constraints?

info: Sorry -- physics jargon. Empirically observed values of aggregate data. Like NGDP.

econ: I'm still not convinced. And it seems like you've come up with a quantity theory of money that can have a liquidity trap. This won't go well. I'll show you why. Let me call up a couple of friends. [dialing ... ] Hello? Hey, Paul, Scott, listen to this.

Paul: What's up?

Scott: Go ahead.

econ: You probably had no idea you'd be on the same telecon today. Anyway, so my friend info here has this new economic theory that's basically the quantity theory of money with a liquidity trap. Thoughts? You first, Paul.

Paul: Mainstream macroeconomics already has pretty good models of the liquidity trap. And the IS-LM model is a simple way to try and explain it without going all intertemporal. We don't really need new models unless they tell us something new.

econ: What's your take Scott?

Scott: A liquidity trap is a result of incompetent monetary policy -- if the theory really is a quantity theory in the spirit of Friedman, then why can't the central bank just create expectations of inflation or NGDP growth? A quantity theory liquidity trap is a bit of an oxymoron. Just create expectations of a permanent increase in the quantity of base money!

econ: Ok, thanks, guys!

Paul: No problem.

Scott: Talk to you later.

econ: See, info?

info: You're right. I have my work cut out for me. The people who would like the liquidity trap in my model think their models are working just fine, and the people who would like a quantity theory don't believe in a structural liquidity trap.

econ: In economics, it's really hard to tell who is right because the data is uninformative. We mostly try to come up with a set of assumptions we like the most and see what we can derive from that.

info: Isn't there a joke about an economist assuming a can opener? Wait ... did you say "like the most"?!?

econ: Yes. And we as a profession really like the assumption that at the root of all economics is a complex ocean of human decisions and expectations. What the representative agent thinks determines the course of the economy.

info: Isn't there a contradiction between the complexity of human decisions and a story with a representative agent?

econ: Nope.

info: That doesn't sound very scientific. And the statement that macro data is uninformative depends on the assumed complexity of your models. If you think the models have a lot of dimensions, like millions of agents or an infinite number of expected paths of NGDP consistent with current conditions, then the data is uninformative. If you think RGDP is an exponential curve with a constant slope on a log chart, then the macro data is completely informative!

econ: But of course the macroeconomy is complicated! People are complicated and the economy is made of people!

info: An oxygen molecule is a really complicated diatomic system of electrons and quarks confined in baryons held together by meson fields, but an ideal gas is really simple. All the details of quantum mechanics, Yang-Mills theories with mass gaps and SU(3)xSU(2)xU(1) symmetry come down to a single number.

econ: But the economy's really complex!

info: It's about 260 J/kg K ...

econ: I said complex!

info: I guess I'll just keep writing on my blog ...

econ: Good luck with your theory! Remember, think: "complex"!

info: More like intractable.

econ: What was that?

info: Nothing.


Some of the dialog is based on actual questions from, quotes by, or blog posts by Noah Smith, Chris House, Robert Lucas, Scott "Scott" Sumner, Paul "Paul" Krugman, Karthik Athreya, Nick Rowe, Simon Wren-Lewis and Robin Hanson. But many are my own lowly attempts at snark and humor.

An alternate title was "A dialogue concerning two high-GRE disciplines", but I thought that was a bit much.

The QE; it does nothing!

The ECB has announced about a trillion Euros worth of QE. I have taken this into account and made no change to my inflation predictions through the rest of 2015 since the monetary base is irrelevant to inflation:

Short term interest rates will fall, though! I'll try to have a graph of that up soon ...

Also, the title reference:

Wednesday, January 21, 2015

Scottish enlightenment photoblogging

During my trip to Scotland, I took a couple of economically-relevant pictures in Edinburgh. I did manage to find a good place in the archives for my picture of David Hume's grave:

I haven't found a good place to use it yet, but I also got one of this memorial to Adam Smith (I walked by it, but did not realize at the time that his grave was in Canongate Kirkyard a few blocks down the Royal Mile):

Not so much for economics, but as part of the greater Scottish Enlightenment, here's mathematician Colin Maclaurin's memorial (at the upper right on the wall):

Not in Scotland, but while I'm at it, I got this one of Joseph Fourier's grave in Paris a couple years back:

Gold was irrelevant

Krugman writes this in the course of a post about the Swiss National Bank exchange rate peg:
The trouble is that regime change is hard to engineer. FDR did it by taking America off the gold standard, but going off gold isn’t something you get to do very often.
Was the switch away from the gold standard really the monetary regime change that got us out of the liquidity trap conditions of the Great Depression?

FDR takes the US off the gold standard in 1933 [corrected, H/T srin]. Bretton-Woods (1944-1945) comes a couple of years after the onset of the Federal Reserve pegging short term interest rates at 0.375% (3/8 of a percent) via a 1942 agreement with the Treasury, along with an 'implicit' cap on long term rates at 2.5%. This policy regime lasts until the Treasury-Federal Reserve Accord in 1951. The last vestige of gold convertibility falls away in 1971. Those events basically describe the potential moments of monetary policy regime change we have available.

You can see the effect of the interest rate peg in the short term interest rate data:

If we look at the three solutions to the information transfer model equations that are required to cover the US price level from the 1920s to the present day, our current solution (blue) doesn't take over until the late 1950s. FDR takes the US off the gold standard in the 1930s; however, the liquidity trap solution (red) continues to be in effect until after 1940. The hyperinflation solution (purple) is in effect from the early 1940s to the 1950s.

If we compare this to the list of historical events, the monetary policy regime change appears to have coincided with the lack of independence of the Federal Reserve in the 1940s ... and is not related to gold at all.

The big events in the gold standard -- 1933 [corrected, H/T srin] (FDR leaves gold), 1944 (Bretton-Woods) and 1971 (terminating gold convertibility) -- all happen in the middle of these solution branches.

This analysis lends itself to the policy conclusion that the BoJ, SNB, ECB and Fed (and anyone else) should abandon central bank independence and adopt an interest rate peg (for long and short rates) to escape the current liquidity trap.

Is this the market monetarist model?

Scott Sumner wrote down a "model"

$$\text{(1) } H_{t+1} - H^{n}_{t} = \alpha (NGDP_{t} - NGDP^{T}_{t})$$

$$\text{(2) } NGDP_{t} = NGDP^{F}_{t-1} + e_{t}$$

$$\text{(3) } NGDP^{F}_{t-1} = NGDP^{T}_{t} + SE_{t-1}$$

Where $H$ is the number of hours worked, $H^{n}$ is the "natural rate" of hours worked, the superscript $T$ means the central bank's target, $F$ is the futures market and $SE$ is a 'systematic error' (from measuring the difference between the futures market NGDP and the central bank's target NGDP).

This "model" does nothing except assert Scott Sumner's view of economics. As such, it carries no real weight. It's a bit like saying a translation of Sumner's blog into Italian is an Italian model. Che fa?! ("What does it do?!") According to equation (2), NGDP in period $t$ is the futures market NGDP in period $t-1$ plus a random error (an NGDP futures market forecasts NGDP as well as it can possibly be forecast). Or, if the central bank is excellent at forecasting NGDP, then the "systematic error" $SE$ is zero (or just random, like $e_{t}$), and if it isn't, it's not. That's really the entire content of equation (3).
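Substituting (3) into (2) gives $NGDP_{t} - NGDP^{T}_{t} = SE_{t-1} + e_{t}$, so (1) says the hours gap is just $\alpha$ times systematic error plus noise. A toy simulation makes that concrete ($\alpha$, the target path, and the error processes are all invented, and the one-period time shifts are suppressed):

```python
import numpy as np

# Toy simulation of equations (1)-(3); all parameter values invented.
rng = np.random.default_rng(1)
alpha, T = 0.5, 200
ngdp_target = np.full(T, 100.0)           # NGDP^T
se = np.full(T, 2.0)                      # 'systematic error' SE
e = rng.normal(0.0, 1.0, T)               # market noise e_t

ngdp_futures = ngdp_target + se           # eq. (3), time shifts suppressed
ngdp = ngdp_futures + e                   # eq. (2)
hours_gap = alpha * (ngdp - ngdp_target)  # eq. (1): H - H^n

# Substituting (3) into (2): the hours gap is alpha * (SE + e) --
# the "model" restates its inputs.
print(round(hours_gap.mean(), 2))
```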

Besides, it isn't clear that futures markets actually work.

However, equation (1) is interesting because, well, it's almost an information transfer model. If we look at the model $P: NGDP \rightarrow H$, we can write (like the labor market model here)

$$P = \alpha \frac{NGDP}{H}$$

The best fit (using CPI inflation, all items) gives us an OK model of the price level and a fairly good model of the inflation rate (it is systematically high):

The key to understanding this is that we can rewrite the previous equation and take the derivative to obtain another version of Okun's law:

$$H = \alpha \frac{NGDP}{P}$$

$$\frac{d}{dt} \log H = \frac{d}{dt} \log \frac{NGDP}{P} + \frac{d}{dt} \log \alpha$$

$$\frac{d}{dt} \log H = \frac{d}{dt} \log RGDP$$

According to Okun's law, RGDP growth is correlated with the change in the employment rate (anticorrelated with the change in the unemployment rate), along with anything else that roughly measures the quantity of employment (like total hours worked).
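The derivation above can be checked numerically: with $H = \alpha \, NGDP/P$ and constant $\alpha$, the growth rate of hours equals real GDP growth exactly (the paths for NGDP and P below are arbitrary invented series):

```python
import numpy as np

# Check that H = alpha * NGDP / P with constant alpha implies the growth
# rate of hours equals real GDP growth (all series invented).
alpha = 0.3
t = np.arange(50)
ngdp = np.exp(8 + 0.05 * t + 0.01 * np.sin(t))
p = np.exp(0.02 * t)
h = alpha * ngdp / p
rgdp = ngdp / p

dh = np.diff(np.log(h))
drgdp = np.diff(np.log(rgdp))
print(np.allclose(dh, drgdp))  # True: the constant alpha drops out of growth rates
```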

As a log-linearized model, the information transfer version of Sumner's equation (1) is

$$h_{t} = n_{t} - \pi_{t} + c_{h}$$

Which means that the natural rate of hours worked and the central bank target NGDP in Sumner's model must basically combine to give you the price level:

$$H^{n} - \alpha NGDP^{T} \sim P$$

What gives the information transfer model version some weight relative to Sumner's equation (1) is that it follows from information equilibrium -- information in the signals from aggregate demand is received by labor hours (and the price level detects the information transfer). Also, Sumner's model doesn't tell us the relationship between his central bank NGDP target and the natural rate of hours worked (I am unsure whether he allows them to be related, as one is a fundamental property of the economy and the other is chosen by the central bank).

Sumner has complained that I don't have a story behind my theory; however, he doesn't seem to have any theory behind his story.

Tuesday, January 20, 2015

Is the market intelligent?

On display at the Tate Modern; it's part of a work by Alexander Brodsky and Ilya Utkin.

Henry put up an ad for a conference called Collective Intelligence 2015, where one of the topics is "The intelligence of markets and democracies". In light of my previous post, I got to thinking ... are markets actually intelligent?

One of my favorite blog posts of all time is another post, by Cosma Shalizi, that addresses this question a bit. Well, it says that trying to explicitly solve the linear programming allocation problem that we use the market to solve is effectively impossible without a heroic dose of computational resources. Which means we really can't check whether the market is doing a good job or not. As Shalizi puts it:
It means that [the market mechanism] faces no competition, nor even any plausible threat of competition.
Well, then.

Does the information transfer model shed any light on this?

What follows are essentially notes for an argument against the idea that we should presume markets can uncover information (are 'intelligent') ... or, put another way, solve an information aggregation problem. In fact, we should not prima facie trust options markets or prediction markets.

Prediction markets and options markets are considered to be beneficial tools because they allow information to be aggregated, rewarding correct information and punishing incorrect information (both monetarily). If I thought Mitt Romney was going to win the 2012 US presidential election, I could have bought a prediction market contract (somewhere like Intrade) that would pay out if he won. Since that was an incorrect prediction, I'd lose money while those who thought Romney would lose (the sellers of the contract) would earn money [1].

Our only prerequisites for this analysis are the general information transfer model and the derivation of supply and demand curves (for ideal and non-ideal information transfer) ... just the first couple of posts on this blog. Here's a quick recap ...

Ideal information transfer from demand to supply (or the future to the present) follows from information equilibrium where the information received by the supply is equal to the information transmitted by the demand, or symbolically I(S) = I(D). Solving the differential equations that result from allowing each side to vary infinitesimally and holding I(D) or I(S) constant results in supply and demand curves that intersect at the equilibrium price.
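Numerically, the simplest reading of that recap is the differential condition $p = dD/dS = k \, D/S$, with $k$ an information transfer index (the value of $k$ and the starting point below are invented for this sketch): holding $I(D)$ constant traces a downward-sloping demand curve in $S$, and holding $I(S)$ constant an upward-sloping supply curve in $D$.

```python
import numpy as np

# Simplest reading of the recap: price p = dD/dS = k * D / S, with the
# information transfer index k and starting point invented for this sketch.
k, D0, S0 = 1.5, 100.0, 10.0

S = np.linspace(5.0, 20.0, 50)
demand_curve = k * D0 / S    # hold demand at D0: price falls as supply grows

D = np.linspace(50.0, 200.0, 50)
supply_curve = k * D / S0    # hold supply at S0: price rises with demand

p_star = k * D0 / S0         # the curves cross at the equilibrium price
print(p_star)
```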

In general, however, we have I(S) ≤ I(D), since information can be lost in the transfer (non-ideal information transfer): the information received by the supply cannot be greater than the information sent by the demand (you can't get more information out of a message than is present in the message). We can use Gronwall's inequality for the resulting differential equation to show that the supply and demand curves from information equilibrium no longer pin down the price at their intersection, but rather represent an upper bound on the price.

Now, let's look at a price movement from A to B (and B to A) in the following diagrams under conditions of ideal and non-ideal information transfer:

Under ideal information transfer (information equilibrium) in the first graph, a price movement from A to B or B to A represents movement of a demand curve. There is an analogous diagram for supply curve movements. If I bought an option that sells at price B in 6 months (or predicted the price B was too high in a prediction market), I would be rewarded if the price fell to A. Likewise if I bought an option to buy at A in 6 months, and the price rose to B, I would also make money. I make money if I correctly guess the actual movement of supply and demand curves [2].

This is the idea behind the incentive structure of prediction markets and options markets that purports to solve the information aggregation problem. Knowledge of factors of how supply and demand will change will lead to profits and being incorrect will lead to losses. Ideal information transfer represents a one-way intelligent box where good information is kept in and bad information is thrown out over time.

You can probably see where I am going with this already.

In the second graph, the price can fall anywhere in the lower shaded area bounded by the red and blue supply and demand curves. The movements from A to B and back are not based on movements of supply and demand curves and in fact movement from B to A represents an additional loss of information (it is farther below the 'ideal' price at the intersection of the red and blue curves). But! Correctly guessing these price movements is still rewarded -- and what is key to this argument is that correctly guessing the move from B to A is rewarded, which represents a fall further below the ideal price. Our box now keeps some bad information and throws out some good information.

Additionally, since the points A and B are exactly the same in the two graphs above, there is no telling which situation you are in and therefore which options/predictions are the faulty trades ruining your information aggregation.

If the series of good and bad trades is random, eventually the price p → 0 and the market collapses. But you don't need to go that far -- since both good and bad information can be rewarded, there is no point at which you can trust the movements of the market price p.

Now maybe you can assure yourself that you really do have ideal information transfer, and you don't need to worry. But then I show you this graph:

Correctly predicting the price movement from A to B, from an ideal market price to a non-ideal market price, can be rewarded (and the prediction that the market stays ideal at A can be punished). Even an ideal market can transition to a non-ideal state and you can make money in a case where information is lost.

Now you might be saying: how do markets work at all under these conditions? We have to be careful and not let the two functions of markets we are describing get entangled.

In prediction markets and options markets, you are trying to use the price mechanism to solve an information aggregation problem. Another way to do this is via polling [3]. The mechanism fails to solve the information aggregation problem because good information leaks out and bad information sneaks in as described above if your market is not (or does not stay) ideal.

In traditional markets for goods and services, you are trying to use the price mechanism to solve an allocation problem. Another way to do this is via rationing. The allocation problem tends to be solved sufficiently by maximum entropy distributions (information equilibrium) for large enough markets -- i.e. supply and demand as viewed in the information transfer model.

The information transfer model helps you see this distinction between information aggregation and allocation because we don't care about the content of the information being transferred from I(D) to I(S). As I've said before, transferring false information in an error-free way is considered more of a success than transferring true information with a couple of errors [4]. When we talk about supply and demand, the market is solving an allocation problem, not an information problem.

Hypothesis: options work best (solve the information problem) when the price mechanism for the underlying commodity or security is solving the allocation problem with I(D) ≈ I(S) because the allocation problem anchors the information problem -- i.e. prevents I(S) << I(D).

I haven't proven this more nuanced hypothesis here (I haven't rigorously proven the original assertion either, but my intuition says both are likely true). However, it would mean that since e.g. an NGDP futures market isn't solving an allocation problem (the market allocates contracts invented in order for the prediction market to exist, as opposed to e.g. pork belly futures, which allocate bacon for people to eat), it would be subject to information-less booms and busts with p → 0 eventually.

To answer the question in the title, though: it seems the market isn't intelligent, as it can't in general solve the information aggregation problem. It does seem able to solve the allocation problem -- however, we can't check whether that answer is the best or even a correct one given an objective function, because of the computational difficulties noted by Shalizi at the top of this post.


[1] The same goes for 2016.

[2] Yes, prices rise on an increase in demand or a fall in supply and it is hard to tell the difference. However in both cases there is a real movement of (or along) the supply and/or demand curves and that is what is important here.

[3] Polling suffers from many problems of its own. I think it is useful to consider that the belief that you can set up a prediction market and have it start producing useful information is a bit like the belief that you can sample the population and expect to get back the right result. In the polling case, the analogous problems to the ones I describe here are things like sample bias and question design.

[4] You should think of this more as an analogy using Shannon's channel capacity theorem than as a model for what is happening ... but more concretely the false statement and the true statement would be something like this:
  • "Inflation is going to shoot up to 10%" = false, without errors
  • "Inflation is go to st3y at or below 2%" = true, with errors