Thursday, July 31, 2014

I do not think that calculation means what you think it means

It took me a minute to figure out exactly how Sumner came up with this result. I assume that Sumner meant 2013 Q4 as the ending point of his second period starting at 2012 Q4, so that wasn't the problem. The problem is that Sumner calculated the NGDP growth rate at some point between 2011 Q4 and 2012 Q4 and at another point between 2012 Q4 and 2013 Q4, not the average growth during those two periods (or any other meaningful number).

See, Sumner took the end points of the time periods and calculated the % change between them. I show more digits to give some evidence that I did the same calculation Sumner did:

2011 Q4 to 2012 Q4: 3.46652% (Sumner rounds to 3.47%)
2012 Q4 to 2013 Q4: 4.56636% (Sumner rounds to 4.57%)

According to the mean value theorem, the slope between the endpoints of a (presumably differentiable) curve only tells you that the instantaneous growth rate achieved that value somewhere in between those endpoints. It is not the average growth for the period (or really anything meaningful at all).
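To see concretely how the endpoint calculation differs from an average growth rate, here is a minimal sketch using synthetic quarterly data; the trend and noise levels are illustrative stand-ins, not the FRED series:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic quarterly NGDP: a constant 3.8% annual trend plus measurement
# noise (illustrative numbers, not the actual data)
quarters = np.arange(12)                       # three years of quarters
trend = 100.0 * np.exp(0.038 * quarters / 4)   # log-linear trend
ngdp = trend * (1 + rng.normal(0, 0.005, size=quarters.size))

# Endpoint method (what Sumner did): one year-over-year change, two data points
endpoint = (ngdp[8] / ngdp[4] - 1) * 100

# Average of the four year-over-year growth rates spanning the same period
yoy = (ngdp[5:9] / ngdp[1:5] - 1) * 100
averaged = yoy.mean()

# The two numbers differ purely because of the noise in the two endpoints
print(endpoint, averaged)
```

Both numbers hover around the 3.8% trend, but they disagree with each other even though the underlying growth rate is constant by construction.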

Of course, this isn't the way you should do this. NGDP is a number with some measurement error, so relying on only the two end points means that you massively increase the error in the derived quantity.

What is the (omitted) error on Sumner's measurement? I assumed a log-linear model of GDP from 2009 Q4 to 2014 Q2 and calculated the error bands. The result is plotted here:

If we use these error bars to predict the error on Sumner's measurement, we get

2011 Q4 to 2012 Q4: 3.8 ± 1.6%
2012 Q4 to 2013 Q4: 3.8 ± 1.6%

That isn't a typo. The two periods have exactly the same growth rate. The difference between the two periods is entirely measurement error.
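The error-band construction can be sketched as follows: fit log NGDP linearly in time, take the residual scatter as the measurement error of a single observation, and propagate it to an endpoint-based growth rate. The series and noise level below are again synthetic and illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Log-linear model of NGDP: fit log(NGDP) ~ slope * t + intercept
# (synthetic data standing in for the 2009 Q4 to 2014 Q2 series)
t = np.arange(20) / 4.0                        # five years, quarterly
log_ngdp = np.log(100) + 0.038 * t + rng.normal(0, 0.006, size=t.size)

slope, intercept = np.polyfit(t, log_ngdp, 1)
resid = log_ngdp - (slope * t + intercept)
sigma = resid.std(ddof=2)                      # per-observation error

# A year-over-year growth rate computed from two endpoints inherits
# independent errors from both, so they add in quadrature
growth_error = np.sqrt(2) * sigma * 100        # percentage points
print(round(slope * 100, 1), round(growth_error, 2))
```

The point of the sketch: the error on an endpoint-to-endpoint growth rate is set by the scatter of individual observations, which is why two periods with "different" endpoint growth can be statistically identical.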

We can try a different method: averaging the growth rate numbers. The period starting from 2009 Q4 also has an average growth rate of 3.8% by this method, though the error is larger at ± 1.9%. Here is a graph:

If you look at this last graph, you can start to see how misleading the numbers in Sumner's post are. I see almost constant growth with random fluctuations around it. If you average these numbers for Sumner's two periods, you end up with another inconclusive result:

2011 Q4 to 2012 Q4: 3.7 ± 1.3%
2012 Q4 to 2013 Q4: 3.9 ± 1.7%

There was no failed experiment and market monetarism didn't pass any test. It's all inconclusive.

PS Sumner's commenter Kailer Mullet is incorrect: the single decimal precision (nearest 0.1%) is exactly how much precision is warranted by these numbers. Adding another decimal place is pure noise and taking one away loses information.

Tuesday, July 29, 2014

On travel again

I'm out in the middle of nowhere for the next two weeks, so blogging will be bursty as I have little else to do in the evenings, but will be busy at the real job all day. I'm working on the piece about the history of expectations and human behavior in economics hinted at in comments on this post.

Monday, July 28, 2014

Chinese statistics seem just fine

I got a request to run the model for China in an email from Jonathan Prince, so here it is. I half expected to get nonsensical results; the "conventional wisdom" in the US is that Chinese statistics are suspect (see here for example). However, after running the model using data from FRED (derived from OECD and IMF data), it turns out that aside from potentially understating inflation in the late 90s/early 2000s, the results for China are largely in line with other countries. Here are the NGDP-M0 and P-M0 plots I've been showing lately with China added:

Here is the actual model fit with parameters (everything below shows model calculations in blue and data in green):

You can see the understatement of inflation in 1999-2000. The information transfer index is high, but constant, meaning that the Chinese economy roughly follows something like the quantity theory of money (however the rate of change of the price level is smaller than the rate of change in the money supply, i.e. log P ~ k log M0 with k < 1):

Due to the seasonal effects and the lack of quarterly NGDP data or "core" CPI data, the year over year inflation is a bit noisy, but the model seems to give us something that looks like "core" CPI:

Again, inflation seems to be understated in 1999-2000. Overall, China seems like a pretty typical high-growth large economy, like the US in the 1960s.
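As a quick sketch of the k < 1 point above, here is how such an index would be recovered from a log P versus log M0 relationship; k_true, c and the noise are made up for illustration (the actual Chinese series come from FRED, derived from OECD and IMF data):

```python
import numpy as np

rng = np.random.default_rng(2)

# If log P = k log M0 + c with k < 1, the price level rises more slowly
# than the money supply. Recover k by least squares on synthetic data.
log_m0 = np.linspace(10, 12, 40)               # growing money supply
k_true, c = 0.7, -3.0
log_p = k_true * log_m0 + c + rng.normal(0, 0.02, size=40)

k_fit = np.polyfit(log_m0, log_p, 1)[0]
print(round(k_fit, 2))  # near 0.7: inflation runs below money growth
```

A constant fitted k like this is what "roughly follows something like the quantity theory of money" means here: money growth translates into inflation at a fixed, less-than-one rate.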

Update 14 January 2017

There might be some issues with the unemployment rate data, however.

Thursday, July 24, 2014

Beware implicit modeling

One of the difficulties you have when you are steeped in a subject for a long time is that you forget when you are making implicit modeling assumptions. Nick Rowe claims that the accounting identity Y = C + I + G + NX is useless, in opposition to the "first-order Keynesian" view that if government spending G → G + δG during a recession we will get real output Y → Y + δG. He's making a claim for the null hypothesis, but it's really hard to say which is a less informative prior. Does a signal from G make it to Y or does it get absorbed by C, I and NX?

I probably didn't make it exactly as clear as I would have liked in my comment on the page, but the idea was that saying the identity is useless is as much a modeling assumption as the first-order Keynesian view. If G → G + δG, then

Y → C + (∂C/∂G) δG  + I + (∂I/∂G) δG + G + (∂G/∂G) δG + NX + (∂NX/∂G) δG

= Y + δG + (∂C/∂G) δG  + (∂I/∂G) δG + (∂NX/∂G) δG

The "first-order Keynesian" view assumes that (during a recession)

|∂C/∂G| , |∂I/∂G| , |∂NX/∂G| << 1

The "useless accounting identity" view assumes that

|∂C/∂G| , |∂I/∂G| , |∂NX/∂G| ~ 1

The only way you can't know what happens when G → G + δG is if some offsetting or amplifying effect of comparable magnitude is possible, making δY < δG or δY > δG, respectively. Note that these are structurally similar assumptions about the dependence of the other variables on changes in G.
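The two implicit models can be made explicit with a toy calculation; the partial-derivative values below are stand-ins for the two assumptions, not estimates of anything:

```python
# How dG propagates to Y through the identity Y = C + I + G + NX,
# given assumed partial derivatives of C, I, NX with respect to G

def dY(dG, dC_dG=0.0, dI_dG=0.0, dNX_dG=0.0):
    """First-order change in Y implied by Y = C + I + G + NX."""
    return dG * (1.0 + dC_dG + dI_dG + dNX_dG)

# "First-order Keynesian": the offsets are << 1, so dY is approximately dG
keynesian = dY(100.0, dC_dG=0.0625, dI_dG=-0.0625)

# "Useless identity": offsets of order one can cancel dG entirely
offset = dY(100.0, dC_dG=-0.5, dI_dG=-0.5)

print(keynesian, offset)  # 100.0 0.0
```

Neither line of the calculation is more "model-free" than the other; each just plugs a different assumed magnitude of ∂C/∂G, ∂I/∂G, ∂NX/∂G into the same expansion.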

Nick also says that the first-order Keynesian view could be used to say that because Y = C + S + T, we could raise taxes (T) to get more real output. However, that is not what that equation states; it states that increasing tax revenue [1] would lead to more real output. Does raising taxes increase tax revenue in a recession? That becomes a modeling assumption. "Raising taxes" is analogous not to increasing government spending but rather to e.g. increasing the number of fixed-price RFPs the government puts out. While increasing the number of fixed-price RFPs could lead to more businesses submitting bids and an increase in government outlays so that G → G + δG, it may be that no business considers any of the potential contracts to be a good deal.

What the first-order Keynesian assumes in this case is that:

|∂C/∂T| , |∂S/∂T| ~ 1

While the Rowe reductio ad absurdum assumes

|∂C/∂T| , |∂S/∂T| << 1

Again, these are structurally similar assumptions.

Rowe believes it is "warped" not to assume the same general dependence of C, I and NX on G as you do for C and S on T.

Update 7/24, 9pm PDT: I think I'd like to make this a little stronger. Rowe's claim is that e.g. consumption C depends to first order on government spending G, i.e.

C = a + b G + ...

with b ~ 1. [2] (This could also apply to I and/or NX, or all three.)

For taxes T, C ~ a + b T makes sense: my personal consumption is basically C = a - S - T (consumption is what is left over after savings and taxes). But for government spending? I'm pretty sure when the stimulus passed, I didn't change my behavior. Maybe I "expected" it to pass and priced it in already.

Another way of putting this is that Rowe is saying the basket of goods comprising C isn't actually at a local maximum or minimum with respect to the given level of government spending. Maybe that is true, but then, that's a model assumption. Consumption isn't utility, but if you take consumption to be proportional to utility, then Rowe is assuming that utility isn't maximized at a given fixed level of government spending. It's still implicit modeling whatever you call it.

[1] More tax revenue could mean we have more output (hence causality went the other way), or that the market took raising taxes as a sign that the recession is over, creating expectations of an improved economy. All kinds of theories could be at work.

[2] It is possible that C = a + c G^2 + ... with a coefficient c >> 1, but that is an unnatural assumption.

Wednesday, July 23, 2014

New preprint on information transfer

Peter Fielitz and Guenter Borchardt have a new version of their paper on the information transfer model up on the arXiv. They updated it with a citation of this blog, which is quite an honor since citing blogs is a somewhat non-standard practice. The paper has an interesting discussion relevant to economics in Appendix A on the question of how you transfer information with a 1-symbol system where log 1 = 0.

Tuesday, July 22, 2014

Is monetary policy best?

Simon Wren-Lewis is annoyed with market monetarists who are annoyed with Paul Krugman who said that they have no political home in the US. Putting my two cents in, Krugman is right. My family consists of pretty stereotypical conservatives, and about the only thing that would make them say that "printing money" to improve the economy is a good thing is to say Obama is against it or that we'll put Reagan's face on it.

But Wren-Lewis reiterated the idea that today's "Keynesians" and market monetarists agree that monetary policy is the best macro stabilization policy when you're not at the zero lower bound.

Is it?

The more I look into this, the less the monetarist program for macroeconomic stabilization makes sense.

Recessions appear to be discrete shocks that are on top of a long run trend

In the information transfer model, the long run trend runs roughshod right past the shock. In this picture, monetary stabilization consists of temporarily altering the trend to offset a temporary shock and then returning to trend. It's a bit like sailing a ship, getting hit by a gust of wind, and lashing the helm to a new course (monetary policy) [1] instead of just turning the wheel temporarily (fiscal policy).

However, in this picture, the economy is already deviating from the trend. What makes any macroeconomist think moving the trend will mean that the deviation will come along with it? Sure, it is not entirely implausible -- in turning the ship to counter a gust of wind, there is no particular reason to believe that the wind will get stronger or weaker in response. But this means you really need a theory of the trends and the shocks [2].

Employment recoveries in recessions seem unaffected by any kind of policy

The employment recovery from a recession has a surprising regularity across many decades (see here for the US and here for Australia). The data almost pose the question themselves: Does any kind of policy actually do anything macroeconomically relevant for unemployment? People do not seem to get hired back faster if monetary policy is "loose" (1960s and 1970s) or it is "tight" (2008). Nor do they seem to get hired back faster if there is significant fiscal stimulus (2008) or not (1991).

So if neither monetary policy nor fiscal policy help speed up the fall in unemployment, then government intervention should not be assessed through macroeconomic relationships with unemployment, but rather assessed via the direct impact of the intervention.

The simplest version of direct impact is something like the Depression-era WPA: the government directly hires people. Additional measures where the government contracts with construction companies to dig holes and fill them up again also fit this bill (preventing layoffs and encouraging hiring).

However, whether you pay for this by borrowing or by printing money does not seem to have a macroeconomic impact on the decline of the unemployment rate. The key point is that the macroeconomic rationales for not using fiscal policy are therefore pretty irrelevant. Another way to put this is that the purported negative consequences of fiscal policy won't make the unemployment rate fall any slower. Crowding out? So what? That won't cause unemployment to fall any slower. Monetary offset? So what? That won't cause unemployment to fall any slower.

Yes, fiscal policy may not make the unemployment rate fall faster, but at least it helps the people that are laid off put food on the table or the people that aren't laid off keep their jobs. And maybe you can fix some of the roads while you're at it. Monetary policy doesn't help people directly -- unless you print money and give it to people.

Note: I am not saying fiscal or monetary policy has no effect on the initial rise in unemployment. My intuition (guess) is that the initial rise in unemployment from the natural rate at the onset of a recession is likely entirely due to human behavior (anxiety/panic); therefore bold assertions from the government, couched in the dominant economic paradigm at the time, easily calm people down and arrest the rise in unemployment. (This would go part way towards explaining why there are no mini-recessions, using something like the "recognition" mechanism Sumner describes.)

Monetary policy sometimes doesn't even work on the things it can affect

In the information transfer model, the impact of monetary policy becomes muted as the size of the economy and monetary base grow. Returning to the ship analogy, the effect of the helm becomes less relevant as the current grows. At that point, directly affecting the current -- e.g. the G in C + I + G + (X-M) -- becomes more relevant.

When it does work, monetary policy works by kind of a dirty trick

Although it is not the entirety of the argument in favor of monetary macroeconomic stabilization, the mechanism by which monetary policy operates is to use inflation to make workers accept a real wage cut while not taking a nominal wage cut (the same trick could be applied to firms or households). Because of money illusion, humans focus on the nominal values and so don't notice their real income is falling.

In the information transfer model, there is significant RGDP growth that is caused by expansion of the medium of exchange when the monetary base is small [3], so this is not necessarily true of the information transfer model. However, monetary policy advocates aren't advocating the information transfer model.

If you put these together, you get that monetary policy is a dirty trick that doesn't always work, doesn't seem to help unemployment, asks us to change the trend in response to a temporary event and requires us to swallow a bunch of theory that hasn't been empirically tested. Where do I sign up?

[1] Of course, the real idea behind the market monetarism is that you can just tell everyone on the ship you're still headed for Boston -- you don't necessarily have to turn the wheel -- and you'll eventually get there.

[2] In the market monetarist view, expectations are important. If recessions are temporary shocks, then any monetary policy compensating for them will also be perceived as temporary -- and hence not work. The only way this works is if either recessions never happen, so monetary policy never has to change in response, or if people are tricked into thinking the policy is permanent (see also the last bolded point).

[3] I actually think this is a major issue for monetary economics. How do economies get started? Well, if you create a monetary system you get an economy -- that is RGDP growth. At some point a doubling of the monetary base may just lead to inflation (according to long run neutrality), but when the base is small, a doubling of the base should have some real effect by allowing more transactions to occur.

Sunday, July 20, 2014

Rationality is beside the point

Brad DeLong has a post wherein he poses the question
Given that people aren't rational Bayesian expected utility-theory decision makers, what do economists think that they are doing modeling markets as if they are populated by agents who are?
intimating that maybe some ideal modeling scheme exists where you just need to replace the "rational Bayesian" with "behavioral". Along with most economists out there with objections to rational expectations, it seems most econoblog commenters object to the "rational" -- unlike, say, me, who objects to the "expectations".

DeLong then quotes Andrew Gelman on how nonlinear utility functions used in economics are suspect and how students believe each step of a utility argument, but are "unhappy with the conclusion". The students seem confused by their own reasoning. Noah Smith has a post from earlier this year on other problems with utility functions. Cosma Shalizi sums it up pretty well (emphasis mine):

The foundation on which the neo-classical framework is raised, though, is an idea about rational agents: rationality means maximizing expected utility, where expectations come from maintaining a coherent subjective probability distribution, updated through Bayes's rule; moreover, the utility function is strictly self-regarding. This is a very well-specified idea, readily formalized in clean and elegant mathematics. Moreover, there's pretty much only one way to formalize it, which makes the mathematical modeler's life much easier. All of this appeals to certain temperaments, mine very much included. Alas, experimental psychology, and still more experimental economics, amply demonstrate that empirically it's just wrong.
All true.

Yet markets seem to work.

This is actually pretty remarkable. We're totally irrational potentially hyperbolic discounters subject to framing effects ... yet markets seem to work.

But! This is only pretty remarkable if you see markets as some kind of system that efficiently allocates resources. If we look at my recent post where I attempt a definition of aggregate demand using the information transfer model, then we see the efficient allocation of resources is not the proper frame. A market transfers any information, not necessarily useful, rational or efficient information. Transferring completely crazy information accurately is considered more of a success in this framework than transferring a correct prediction about the future only nine times out of ten.

"Gold is going to a million dollars an ounce."

"Inflation will take off at any minute."

The market is "inefficient" when these statements are inaccurately transferred from the aggregate demand to the aggregate supply, not when they are wrong. Of course, for the information to be transferred accurately some element of the aggregate supply has to receive the information accurately. And the world might work in such a way that really crazy information is almost never received accurately -- the person on the other end of a gold transaction really just wants to make a not unreasonable amount of money and thinks gold is going to fall in value [1]. Maybe our normal cognitive biases are transferred accurately, yet markets work [2]. I don't know all the answers. 

I do know that assuming the information is transferred fairly accurately gives you supply and demand diagrams. It also does a good job predicting inflation. So maybe we shouldn't worry too much about rationality or individual utility functions.

At least, if you use the information transfer model.

[1] It is interesting that people selling gold in advertisements seem to put forward the attitude that one should have when one wants to buy gold ... it's a safe asset, you'll stay rich or become richer if you buy gold; shouldn't these sellers just hold on to their gold? This is the opposite of many other advertisements, which usually say something like: we have a lot of TVs and we can't possibly watch them all, so we implore you to come on down to Crazy Eddie's and take them off our hands.

[2] I'd like to make the distinction between where an unfettered market leads to a Pareto efficient allocation and where an unfettered market leads to some sub-optimal allocation. In both cases, the information transfer may be "efficient" in the sense that the information received is the information transmitted. It may not be socially optimal.

Friday, July 18, 2014

US inflation predictions

If you looked carefully at the previous post, you might have noticed that the lines extended a bit beyond 2014. That was because I was also going to produce some inflation predictions for the US using the same procedure. Here is US inflation out to 2020 using predictors 5-20 years back (they turn out to be mostly consistent):

The same caveats apply as in this prediction of Canadian inflation: this assumes the log-linear extrapolation of NGDP and M0 using the prior 10 years holds.

Inflation prediction errors

Essentially following the procedure of the previous post -- fitting the price level function P to data from 1960 to a year Y, and then log-linearly extrapolating NGDP and M0 (currency) from Y to 2014 to find P(y>Y) and the inflation rate i(y>Y) = d log P/dt -- I thought I'd see how the errors evolve as you add more data. The log-linear extrapolations only used the past 10 years of data starting from Y.
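The extrapolation step described above can be sketched like this; `loglinear_extrapolate` is a hypothetical helper name, and the series is a stand-in for NGDP or M0 rather than actual data:

```python
import numpy as np

# Fit log(x) over the fitting window and extend the line forward
def loglinear_extrapolate(t, x, t_future):
    """Fit log(x) ~ a*t + b and return extrapolated levels at t_future."""
    a, b = np.polyfit(t, np.log(x), 1)
    return np.exp(a * np.array(t_future) + b)

t = np.arange(40) / 4.0                  # ten years of quarterly data
x = 100.0 * np.exp(0.05 * t)             # exact 5% log-linear growth
future = loglinear_extrapolate(t, x, [10.0, 11.0])
print(np.round(future, 1))               # → [164.9 173.3]
```

In the actual procedure the extrapolated NGDP and M0 levels are then fed into the price level function P to produce the inflation prediction i(y>Y) = d log P/dt.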

In one sense, I wanted to figure out if the previous post was a fluke (it isn't), and also what the error would be on predicting inflation 2014 - Y years out.

Here are the inflation extrapolations colored in rainbow colors with red being the most recent and purple being the most distant past (the CPI inflation data is the green jagged line):

And here are the errors (absolute value of the mean difference between the prediction and the measured CPI) as a function of years out the prediction is made (2014 - Y):

The graph shows the irreducible measurement errors in gray. Inflation is not a smooth curve and the measurement of CPI contains some fluctuations month to month. If one averages over shorter and shorter periods, even if you have the trend exactly right, you're going to get larger and larger errors. I estimated this effect by finding the distribution of errors around a linear fit to 1994 to 2007 inflation data (a fairly straight line) to estimate the error distribution. Using the estimated distribution, I ran Monte Carlo simulations of the absolute value of the mean error averaged over different time periods to produce the gray band of points. You can see that this accounts for much of the model prediction error over shorter time periods (red points on the left side of the graph above). Additionally, the 12 basis point error from Y = 2007 (7 years back from 2014) in the previous post is typical for extrapolation from that time period and likely represents only irreducible error.
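The Monte Carlo estimate of the irreducible error can be sketched as follows; the noise level and window sizes below are illustrative, not the values fitted from the 1994 to 2007 data:

```python
import numpy as np

# Even with the trend exactly right, month-to-month CPI noise limits how
# well an average over a short window can match the measured data
rng = np.random.default_rng(3)
sigma = 1.0            # std of monthly inflation around trend, pct points
n_trials = 10_000

def mean_abs_error(months):
    """Monte Carlo estimate of |mean error| when averaging `months` points."""
    noise = rng.normal(0, sigma, size=(n_trials, months))
    return np.abs(noise.mean(axis=1)).mean()

# Shorter averaging windows leave more irreducible error
print(mean_abs_error(12) > mean_abs_error(60))  # True
```

Since the mean of n noisy observations has error scaling like 1/sqrt(n), the gray band of irreducible error necessarily widens as the prediction window shrinks, which is what the red points on the left of the graph reflect.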

That is a pretty startling piece of information. It means you likely can't do any better than the information transfer model in predicting inflation in the medium term (5-15 years out).

Thursday, July 17, 2014

Better than TIPS

Scott Sumner asked (in comments on his post on AD) if the information transfer model (ITM) was better at predicting inflation (gray line in the graph above) than TIPS spreads (the difference between inflation indexed treasuries and ordinary treasuries of the same maturity, red jagged line in the graph above). The TIPS spread on a given day represents the market's future expected average inflation over the maturity of the treasury (we'll use the 10-year).

Well, the ITM is better -- about 5 times better. Actually, the ITM was better at predicting inflation even though the worst economic crisis since the Great Depression intervened!

I fit the ITM model to 1960-2006 Q4 data and then did a log-linear extrapolation of NGDP  and M0 (currency in circulation) starting in 2007 to predict inflation from 2007 to 2014 Q1. That's the blue line in the graph above.

The 10-year TIPS spread from Q1 2007 represents the market's best guess at the average inflation rate over the next 10 years, and so should also represent the average inflation rate from 2007 to 2014 Q1. That's the red straight line.

The ITM model average difference from 2007 to 2014 was -12 basis points, while the TIPS model was on average off by +66 bp.
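The score used here is just the mean signed difference between prediction and measured inflation, converted to basis points. Here is a sketch with made-up stand-in series (not the actual ITM or TIPS numbers):

```python
import numpy as np

# Illustrative inflation series, annual percent
measured  = np.array([2.0, 2.2, 1.8, 1.6, 2.1])
itm_pred  = np.array([1.9, 2.1, 1.7, 1.5, 2.0])   # runs slightly low
tips_pred = np.array([2.6, 2.9, 2.5, 2.3, 2.7])   # runs persistently high

def mean_diff_bp(pred, meas):
    """Mean (prediction - measurement), converted to basis points."""
    return float((pred - meas).mean() * 100)

print(mean_diff_bp(itm_pred, measured), mean_diff_bp(tips_pred, measured))
```

A small negative score means the prediction ran slightly below measured CPI on average; a large positive one means a persistent overestimate.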

Actually, the ITM totally dominates the market prediction -- the 2007 prediction of the ITM was better than the TIPS prediction for almost every date you start the TIPS prediction. This graph shows the ITM 2007 prediction difference alongside the TIPS prediction from the given year:

The ITM prediction from 2007 was a better predictor of inflation in 2013 than the TIPS spreads from 2013! The ITM model falls apart in the last couple months, but then the past couple months only represent a couple CPI data points. The ITM model represents a long run trend, so its predictions will have a higher error over short runs of data. The market's random-guess TIPS spread is better over short runs because in the short run, inflation this month is about what inflation was last month.

PS It seems the TIPS spread is a good predictor of the TIPS spread though.

Wednesday, July 16, 2014

Aggregate demand is aggregate information

Scott Sumner asks [1] what aggregate demand (AD) is:
So what do [economists] think AD is? ... What is held constant along a given AD curve?  Presumably a given AD curve is supposed to be holding constant things like monetary and fiscal policy, animal spirits, consumer sentiment, etc.
The information transfer model has a simple answer: AD is a source of information that is received by the aggregate supply (AS), with the price level (i.e. the "price" in the AD/AS model) detecting the information transfer.

For a single good we have something like "I would like a pound of bacon for 6 dollars" (demand information) is received by a physical pound of bacon (supply) being sold ... with the price paid "detecting" the information transfer.

Of course, you have people who would buy bacon at 2 dollars or at 10 dollars. This information is "lost" at the supply; in the former case because the sale doesn't happen and in the latter case because the extra 4 dollars is not collected.

This information from the demand includes all kinds of expectations and theories as well. "I think the price of bacon will rise tomorrow to 8 dollars" is also communicated by a purchase of bacon at $6 today -- but again with some information loss. "Bacon is yummy at any price" or "I am making bacon wrapped shrimp for dinner tonight" also comes through, but again with some information loss.

In this way, information received by the supply (IS) is generally less than information sent by the demand (ID), or IS < ID. However, IS ~ ID is a good approximation for a functioning market [2]. And only in the case of IS ~ ID do you get supply and demand curves (otherwise you get supply and demand "regions" bounded by the supply and demand curves).

This picture means that IAD is the aggregate of all kinds of economic information, from everyone's future expected path of monetary policy, exchange rates or the size of the tech sector to the current price of commodities and your paycheck. AS is the set of all the things that get made and/or sold because of that information. Typically, IAD ~ IAS is a good approximation in a functioning market, and that gives you AD and AS curves which represent the effect on the price level of a given set of information being sent (AD) or received (AS). (These curves then intersect at some value where AD = AS.)

If you gathered up all the information people put forward at one point in time (think of it as one grand "specification" for an economy ... like an engineering spec) and then varied the amount of stuff actually produced by the economy, that would trace out the "AD curve". The price level falls as you produce more stuff to that particular specification. In that sense, you can say the entire curve is "the" AD ... at one point in time. This answers Sumner's question about what is held constant along an AD curve. [3]

Now maybe this is just my opinion, but I think this paints an extraordinarily clear picture of what AD is.

[1] This post is an adaptation of my comment on Sumner's blog.

[2] I made the argument once before that maybe ID ~ IS is a condition required to have a sensible theory of macroeconomics.

[3] For another take, from the perspective of thermodynamics, see here. That specification is the "demand bath" analogous to a heat bath in isothermal processes.

Monday, July 14, 2014

If physics blogs were like economics blogs

I recently came across some interesting new work in computational cosmology, this time in a paper from Jan Ambjorn and Timothy Budd that lent itself to some rather beautiful 2-dimensional universes. Check out this video for some of these 2D universes embedded in 3D:

Causal dynamical triangulation may potentially be the long-sought quantum theory of gravity. It constructs universes from tiny triangles ("simplices"). Older but similar "brute force" approaches failed, producing universes with a low number of effective dimensions. However this approach assumes one extra ingredient: causality. Complexes with causality-violating simplices are forbidden, reducing the number of possible configurations. This results in universes with the "correct" number of effective dimensions.

Posted 07/14/2014 11:14am

Comments (13)

Ted A. Thomas III I've spent 30+ years as a manager of a piano moving company, so I think I know a thing or two about gravity. All I can say is that this isn't how gravity works in the real world. You have to apply force perpendicular to the ground plane in order to move objects along the z-direction.

Anonymous LOL this is completely wrong

jeffkeen22 This ignores what Aristotle said: everything moves toward its natural place.
Anonymous Ah, but Aristotle got heliocentrism wrong. 
jeffkeen22 Heliocentrism was the downfall our society. We're living in a dark age now. 
Anonymous You realize Aristotle was a geocentrist right?
unlearningphysics Newton knew about this all along. He's credited with discovering calculus but he used trigonometry in his Principia for a reason.

Paulite2000 You liberals think government can fix everything.
progman2000 It can!
StudentofPhilosophy I've long thought gravity is an illusion invented by physicists to keep ordinary people from being allowed to speak on the subject. Remember -- Newton's inverse square law is guilty of what Einstein called spooky action at a distance. Yet astrophysicists and aerospace engineers still use Newton's flawed equations to run our society. I was completely astounded when I saw this:

Physicists Pin Down Value Of Newton's Gravitational Constant

How can they pin down something that Einstein says is fundametally flawed? I was a philosophy major in college, so I am well trained in being able to spot the fundamental contradiction and circular logic.

oldisnewagain "Once upon a time a valiant fellow had the idea that men were drowned in water only because they were possessed with the idea of gravity. If they were to knock this notion out of their heads, say by stating it to be a superstition, a religious concept, they would be sublimely proof against any danger from water. His whole life long he fought against the illusion of gravity, of whose harmful results all statistics brought him new and manifold evidence. This valiant fellow was the type of the new revolutionary philosophers in Germany. "
aphysicist Actually that may be more correct than you know -- gravity may be an entropic force and the local direction of the gravitational field is the direction of the holographic dimension.
timecube4444 1000 dollars to anyone who can disprove the Harmonic Cube


PS This is kind of a half-baked joke, but apparently I was scooped in concept by John Cochrane of all people.

PPS I really think the CDT group is doing really neat stuff -- actually that is what I devoted my spare time to playing with before I took up the economics blogging. I was hoping to speed up the computations with GPU processing, but it appears they've already gotten there.

Sunday, July 13, 2014

In defense of equilibrium

I signed up for the comment updates at John Quiggin's post and they have been flooding in for the past couple of days. I would like to say that the comments are very thoughtful and erudite (i.e. spelled correctly). Crooked Timber seems to have that effect. One pervasive thread was the discussion of equilibrium, in particular a reference to Krugman's blog post where he describes "useful economics":
"So how do you do useful economics? In general, what we really do is combine maximization-and-equilibrium as a first cut with a variety of ad hoc modifications reflecting what seem to be empirical regularities about how both individual behavior and markets depart from this idealized case. And people using this kind of rough-and-ready approach have done really well since 2008, on everything from inflation to interest rates to the effects of austerity."
Now the definition Krugman is using here is effectively the "ADM" equilibrium -- there exists a set of prices such that excess demand is zero (or: there exists a set of prices that clears the market). It is important to keep that definition in your head through the rest of this post, because many comments seemed to see this as some sort of shocking revelation that the entire edifice of economics was based on a massive logic fail. For example:
"As an operational model, maximization-and-equilibrium isn’t ideal — it doesn’t make any sense at all. In a world of radical uncertainty, where people do not know what they do not know, 'maximization' can not defined. 'Equilibrium' is even more vacuous — in practice, it comes down to what pedants call, the ergodic hypothesis, a crazy idea that history doesn’t matter — particularly when you consider that capitalism is one strategic investment after another in making history matter."
Strategic investment is "maximization", so I'm not sure what that's all about. Plus the other side of that "investment" is losing out, and so there's your ergodicity. Snark aside, in the past history has not seemed to matter -- at least with regard to economic variables. The economy appears to return to a long run trend (Milton Friedman's plucking model). This has been a major debate: does RGDP growth have a unit root? The argument is inconclusive based on the data alone; you need to assume an underlying economic model in order to say whether the ergodic hypothesis holds or fails. You can't just claim ergodicity is crazy without specifying a model. In the information transfer model, the "ergodic hypothesis" seems to hold. That suggests that if ergodicity appears to be crazy in your model, it may well be your model that is crazy.
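The unit root question can be made concrete with a toy sketch (my own illustration, not from the post): in a trend-stationary AR(1) process with |ρ| < 1 the effect of a shock decays away, so history doesn't matter, while with a unit root (ρ = 1, a random walk) the shock is permanent:

```python
# Toy illustration: impulse response of a shock after n periods.
# Trend-stationary AR(1): y[t] = rho * y[t-1] + shock, with rho < 1.
# Unit root (random walk): the same recursion with rho = 1.

def impulse_response(rho, n):
    """Effect on y[n] of a unit shock at t = 0."""
    return rho ** n

# A shock to a trend-stationary series decays: history doesn't matter.
print(impulse_response(0.9, 100))  # ~2.7e-5: the effect has essentially vanished

# A unit-root shock is permanent: history matters.
print(impulse_response(1.0, 100))  # 1.0: the full effect remains
```

The data can't easily distinguish ρ = 0.99 from ρ = 1 over a few decades, which is why the debate is inconclusive without an underlying model.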

These are the comments that set me off though, pushing me to become the non-economist champion of the besieged economists:
"By hammering at equilibrium no matter how often it fails to map our experience with the world, academic orthodox Econ simply refuses to notice a problem."
"... equilibrium (demand equals supply) is probably not a good way to understand a dynamic, chaotic, system characterized by violent reversals."
These comments seem to suffer from the availability heuristic -- only newsworthy economic events seem to be valid subjects of scientific economic inquiry.

And in their defense, there aren't many news stories on Bloomberg where supply met demand through the price mechanism yet again. Even the massive failure of macroeconomic management that was the Great Recession only shaved about 3.5% off of NGDP. That's right -- about 96.5% of transaction value (more than 14 trillion dollars) was basically unaffected. Bloomberg disproportionately fixated on the 3.5%; so do we all.

And how many violent reversals have you seen in the price of bacon? And food is one of the more volatile prices out there -- which is why "core CPI" leaves out food.

The attacks on economists only get more confused:
"One of the worst ideas ever is that of economic equilibrium to which things return. Better would be a sense of history and a realistic view of the world we live in. Exponential growth cannot continue forever. But capitalist economies have relied on that from the beginning."
This elides the definition of equilibrium -- how do you "return" to an equilibrium in the presence of exponential growth? That exponential growth is the "equilibrium", and if exponential growth halts [1], that will be the new equilibrium, so that's a non sequitur.

I actually have a related discussion here. Before you jump on the fact that I say that equilibrium does not exist, note that I am referring to a specific equilibrium: constant inflation and constant interest rate over the long run.

Anyway, an equilibrium in the economics sense is not some status quo to which things always return. It is (remember the definition) a set of prices where aggregate supply meets aggregate demand, i.e. prices that clear the market. I'd say that's a pretty realistic view of the world seen with a sense of history: I don't see a lot of excess demand in the US (except traffic, which is a failure of the price mechanism on the scarce supply of roads at a given time) and there isn't a lot of excess supply (except maybe in some cases of government subsidies for corn and wheat).
"The more I read about equilibrium analysis the more I couldn’t believe that anyone would begin to imagine it might say something interesting about human behavior. What was worse, I found out that in practice, nobody could point to an example of a equilibrium model that matched the data."
This is where the madness creeps in. All I could say to this is: Really? Does this commenter go out to the supermarket thinking the price of bacon will suddenly be 30 times as much one day and half as much the next, with bacon filling the entire grocery store? We operate every day with prices that are roughly the same as yesterday's and that most products you bought yesterday will be on the shelves again today. The whole concept of a price system without massive surplus or shortages, e.g. a supermarket, is a prediction of equilibrium! I should be careful. Maybe that commenter lives in Somalia and is trying to use bitcoin.

Sure there is a trend towards higher prices (inflation), but that is not inconsistent with equilibrium -- if the trend is set by monetary policy, then that trend represents equilibrium and prices return to that trend assuming there is not some massive supply or demand shock.

Another (snarky) way to put it is that a naive market equilibrium employment rate is predicted to be 100%. The current value? 93.7%. Here is a graph of the naive equilibrium model and the slightly improved natural rate model:

Economists don't tend to trot this plot out (maybe they should?) because they're more interested in the deviations from equilibrium. It's actually rather amazing that the model should work this well!

That brings us finally to this comment:
"Back in the day, to derive an equilibrium analysis, you had to find a constraint, such as a conservation law."
While I definitely sympathize with this approach (see e.g. here), this is a very narrow definition of equilibrium. For example, this leaves out ecological equilibria (maybe there is some as yet undiscovered conservation law?) or the global climate thermal equilibrium (which is actually a product of a non-equilibrium system).

The scientific way to go about an equilibrium analysis in this framework is first to observe what appears to be an equilibrium in the data. You then say as a scientist: maybe this is an equilibrium, and here is the constraint. This "constraint" is a statement of an empirical regularity. For example, the Phillips curve represents a loose empirical regularity between the inflation rate and the unemployment rate. The equilibrium model is then that shifts in unemployment cause shifts in inflation.

Sure, the Phillips curve seemed to break down, forming the basis of the Lucas critique (which says, in this framework, that maybe empirical regularities aren't the way to go about discovering those constraints). But this process of identifying a constraint and deriving an equilibrium analysis is exactly how Krugman, in the quote above, says economists go about doing useful economics.

[1] Noah Smith has an excellent takedown of a physicist (shame!) who I am a single degree of separation from (more shame!) making the same clumsy claim as the commenter, that exponential growth has to stop.

Thursday, July 10, 2014

Remarkable recovery regularity and other observations

John Quiggin wrote an interesting post about the failure of search theory's prediction that with the advent of the internet (reducing search costs and times), unemployment should drop faster. I commented on the post, citing my attempt at matching theory using the information transfer model. I added that because the model depends only on the number of hires and vacancies and does fairly well completely ignoring any microfoundations, search costs and the internet may have nothing to do with it.

The comment thread has many "just-so" stories (or random rants about the government, economists or the plutocracy), however one commenter had an interesting take from queueing theory: the delay is proportional to the buffer size. With the internet also increasing everyone's buffer size, the gains from reduced cost and search times are eaten up and it is a wash. This is actually a measurable thing (the number of candidates businesses go through could be surveyed), so maybe it could have some explanatory power.

My opinion is that searching for a mechanism is something of a Sisyphean task. The thing is: recoveries in employment have remarkable regularity. I've noted the regularity before; here is another graph where I've excised the unemployment increases and show lines of the same slope (fit to the current recovery):

This regularity over several decades would imply that any mechanism that explains the rate likely has nothing to do with the internet, inequality, jobless recoveries, war, government spending, unemployment benefits, Keynesianism, monetarism, technology, ... etc. It is doubtful these different forces conspire in differing degrees to achieve approximately the same result every time.
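To make "lines of the same slope" concrete, here is a minimal sketch (my own toy data, not the FRED employment series): fit an ordinary least-squares slope to two hypothetical recovery segments and find nearly the same rate for each:

```python
# Sketch with made-up data: estimate the slope of each employment
# recovery with an ordinary least-squares line and compare.

def ols_slope(ts, ys):
    """Least-squares slope of ys regressed against ts."""
    n = len(ts)
    tbar = sum(ts) / n
    ybar = sum(ys) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
    den = sum((t - tbar) ** 2 for t in ts)
    return num / den

# Two hypothetical recoveries: unemployment falling roughly 0.8 points/year.
recovery_a = [(0, 10.0), (1, 9.2), (2, 8.4), (3, 7.6)]
recovery_b = [(0, 7.8), (1, 7.0), (2, 6.3), (3, 5.4)]

slope_a = ols_slope(*zip(*recovery_a))
slope_b = ols_slope(*zip(*recovery_b))
print(slope_a, slope_b)  # both close to -0.8 points/year
```

The regularity claim is that, done with the real data, the fitted slopes of successive recoveries come out roughly the same decade after decade.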

I also wanted to repost the "metastability" I observed in the unemployment rate I talked about in this post just for fun.

Wednesday, July 9, 2014

Information transfer is a state of mind

I am promoting a response to a comment from Tom Brown to a post. Tom's question was: "How would you describe [the original arXiv paper] to someone?"

I personally like the first line of Fielitz and Borchardt's abstract as an answer: "Information theory provides shortcuts which allow [one] to deal with complex systems." The specific thrust of their paper is that it looks at how far you can go with the maximum entropy arguments without having to specify "constraints". This refers to partition function constraints optimized with the use of "Lagrange multipliers". In thermodynamics language it's a little more intuitive: basically the information transfer model allows you to look at thermodynamic systems without having defined a temperature (Lagrange multiplier) and without having the related constraint (that the system observables have some fixed value, i.e. equilibrium [1]).

This is how the information transfer model allows you to use maximum entropy while liberating you from having to either identify a "temperature" or even be in equilibrium. The downside is that the resulting approach has very limited dynamics and may not tell you much.

However this makes it a great approach to economics because it's a complex system with large numbers of things swirling about and there is no real concept of "temperature" or (sorry, economists) equilibrium. That is to say, more generally, there are no constraints in an economic system to use Lagrange multipliers for. There are few identified conservation laws in economics, so there is little that has a fixed value [2].

The resulting economic information transfer model is successful, but with pretty limited scope. It doesn't capture the "business cycle"; it captures long run trends. However, that alone can be seen as a significant success (in my view). It also makes some predictions about low growth and low inflation in the future of advanced economies. Even the rudimentary information transfer approach may be far better defined than the current state of economics, which has to resort to "expectations" (macro outcomes may depend on whether people consider the central bank to be "credible") and where there are still arguments about how money works.

Interestingly, the partition function approach I've been using seems to say that there may actually be a Lagrange multiplier/temperature we can use that's a function of log M0 (currency component of the monetary base), and the "constraint" may be something that reduces to the quantity theory of money in certain limits (an economy in equilibrium is one that satisfies the quantity theory to some approximation [3]). In this view, a large economy with a large monetary base may be thought of as a "cold" economy that has low inflation and mostly has its low-growth markets occupied (like a cold thermodynamic system has its low energy states occupied).

[1] The constraint is generally that the system have some fixed energy, which involves both energy conservation and thermal equilibrium (i.e. the energy isn't changing).

[2] After originally writing this sentence, I wanted to make sure it was true. It turns out Samuelson identified a conservation law and Ramsey's bliss point may be seen in this light.

I also found this quote by Samuelson:
There is really nothing more pathetic than to have an economist or a retired engineer try to force analogies between the concepts of physics and the concepts of economics. How many dreary papers have I had to referee in which the author is looking for something that corresponds to entropy or to one or another form of energy. Nonsensical laws, such as the law of conservation of purchasing power, represent spurious social science imitations of the important physical law of the conservation of energy; and when an economist makes reference to a Heisenberg Principle of indeterminacy in the social world, at best this must be regarded as a figure of speech or a play on words, rather than a valid application of the relations of quantum mechanics.

[3] As Bennett McCallum puts it: the quantity theory is not just the equation of exchange. It includes long run neutrality. I've made the supposition before that long run neutrality may be an approximate symmetry of economics. This symmetry, via Noether's theorem (vaguely), may be related to the "conservation law" given by the quantity theory. The analogous situation in thermodynamics is that time-symmetry leads to energy conservation (a thermodynamic system that is constant in time is one that is in equilibrium with a fixed value of energy).

Worthwhile Canadian Prediction

Nick Rowe has a piece today on Canada's monetary policy targets and instruments, in particular talking about simple rules for conducting monetary policy and their interpretation. Both Rowe and Scott Sumner have pointed to Canada as evidence that central banks can achieve whatever inflation rate they would like.

I'm going to put forward a prediction, using the information transfer model, that Canada will either undershoot its inflation targets or will have produced significantly more currency than the current log-linear trend.

Here is the price level (CPI, all items) extrapolation using the log-linear extrapolation of NGDP and M0 [1]:

And here is the extrapolated M0 with the required M0 to produce 2% inflation shown in red:

The interesting thing about this prediction is that it should start to become apparent by the end of next year (Dec 2015). I will add that this actually makes me a little suspicious of the model itself, and is therefore part of the reason I'm putting this prediction out there. One should always be suspicious of predictions that say behavior is just about to change (e.g. we're on the verge of massive inflation; we're on the verge of secular stagnation).

Additionally, there is a possibility of a "squishy" outcome by the end of 2015: the currency base may increase a little and CPI may undershoot a little -- that would just push the decision point out a year or so.

[1] Model details:


P(N, M) = p0 k(N, M) (M/m0)^(k(N, M) - 1)

k(N, M) = log(N/c0)/log(M/c0)


c0 = 0.2383 (billions of Canadian dollars)
m0 = 148.9 (billions of Canadian dollars)
p0 = 1.049

I used a simple log-linear extrapolation for N = NGDP from FRED and for M = "M0", the currency component of the monetary base, using Bank of Canada note liabilities data. Here P is the CPI, all items, from FRED.

Note that the k function is the inverse of the usual kappa; it simplifies the equation a bit.
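The model above can be written down directly in a few lines. This is a sketch using the fitted parameters from this footnote; the sample inputs (N = 1800, M = 65 billion CAD) are illustrative made-up values, not the extrapolated series:

```python
from math import log

# Fitted parameters from the post (billions of Canadian dollars).
c0 = 0.2383
m0 = 148.9
p0 = 1.049

def k(N, M):
    """Information transfer index (inverse of the usual kappa)."""
    return log(N / c0) / log(M / c0)

def P(N, M):
    """Model price level given NGDP N and currency M."""
    kk = k(N, M)
    return p0 * kk * (M / m0) ** (kk - 1)

# Illustrative (made-up) inputs: N = 1800, M = 65 billion CAD.
print(k(1800.0, 65.0))  # index a bit below 1.6
print(P(1800.0, 65.0))  # a dimensionless price level near 1
```

Running this over the log-linear extrapolations of N and M gives the dashed curves in the graphs above.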

Tuesday, July 8, 2014

Why printing more money could have done nothing

I agree with everything this article says about preventing the Great Recession except for the fact that it should refer to a hypothetical Great Recession that happened in 1970 ([1], [2]).

From Russia with love

Commenter Tom came over for a visit to this blog through a link at Marginal Revolution. His objections to the information transfer model included the scenario where Russia experienced inflation when it expanded its monetary base while the US did not experience inflation with its expansion. However, this is exactly the kind of thing the information transfer model explains. Russia appears on the left side of this graph, where the slope of the curves P(M0) and NGDP(M0) is much greater:

Russia is still on the "quantity theory" side of the graph, not the "liquidity trap" side (the right hand side). Here are the fits to the price level and the information transfer index, for reference (the lack of NGDP data for Russia before 2003 limits the range over which I could test the model):

I show the GDP deflator, CPI and CPI less food in the graph above.

Friday, July 4, 2014

Notes from Ben Bernanke and the P* model

In doing the research for my post today I came across this speech from Ben Bernanke in 2006. He references an inflation model that uses M2 called P* that made it to the front page of the NY Times. In the speech, Bernanke tells us that M2 was growing too slowly:
Unfortunately, over the years the stability of the economic relationships based on the M2 monetary aggregate has also come into question. One such episode occurred in the early 1990s, when M2 grew much more slowly than the models predicted. Indeed, the discrepancy between actual and predicted money growth was sufficiently large that the P* model, if not subjected to judgmental adjustments, would have predicted deflation for 1991 and 1992.
He had noted earlier in the speech that M1 also broke down (so people became more interested in M2):
For example, in the mid-1970s, just when the FOMC began to specify money growth targets, econometric estimates of M1 money demand relationships began to break down, predicting faster money growth than was actually observed.
Problems with the narrow monetary aggregate M1 in the 1970s and 1980s led to increased interest at the Federal Reserve in the 1980s in broader aggregates such as M2.
The big take-away:
Unfortunately, forecast errors for money growth are often significant, and the empirical relationship between money growth and variables such as inflation and nominal output growth has continued to be unstable at times.
So how well does the P* model do? This is from the Cleveland Fed:

It seems that the P* model does a little bit better than the information transfer model during that period (the IT model is a model of the CPI instead of the deflator):

Of course, the benefits of the IT model include having only 3 parameters compared to P*'s 7 or 8 parameters (I've seen anything from 6 to 9 in various references), along with the capability to do this:

In this graph just above, I've used the parameters fit to 1960-1990 and show the results for 1990-2014. P* as noted by Bernanke above started to predict deflation in 1991-1992. Here is the inflation rate from the IT model corresponding to the "out of sample" prediction above (note there's no deflation):

What does "model" mean?

In my challenge to macroeconomists to show a theoretical model of the price level or inflation compared to empirical data, I got a couple of responses that put forward quantity theory models where P = k MB (which I noted at the beginning of the post) or NGDP = k M2; also David Andolfatto asked what I meant by model (H/T Tom Brown).

To answer David's question, I was being pretty loose with what I meant by model. I intended model to mean a set of equations that are motivated by some theory, that then take some empirical inputs to define the parameters and subsequently output some other variable. On the surface, NGDP = k M2 fits this definition. You fit the parameter k so that k M2 fits NGDP, and then the subsequent model NGDP = k M2 outputs NGDP given a value of M2. The theory motivating the equation is essentially the quantity theory of money, including the supposition that "velocity" is stable in the long run.
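That fitting procedure can be sketched in a few lines (toy numbers, not actual FRED data): least squares through the origin gives k in closed form.

```python
# Sketch: fit k in NGDP = k * M2 by least squares through the origin
# (made-up toy numbers, not actual FRED data).

m2   = [4.0, 5.0, 6.0, 7.0]    # M2, trillions
ngdp = [8.1, 9.9, 12.1, 13.9]  # NGDP, trillions

# Closed-form least squares with no intercept: k = sum(M*N) / sum(M*M).
k = sum(m * n for m, n in zip(m2, ngdp)) / sum(m * m for m in m2)
print(k)  # roughly 2: the stable "velocity" in this toy quantity theory
```

The "model" is then just NGDP = k × M2 with the fitted k, which is exactly the kind of loose definition I had in mind.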

But what does it mean when we say NGDP = k M2? Are we saying that increased M2 causes NGDP to go up? That is the exogenous money assumption in the quantity theory. Are we saying that, starting from an initial condition M2(0) and NGDP(0), there is a complex relationship where M2 causes NGDP to grow which causes M2 to grow, which feeds back to NGDP, etc, much like the changing electric and magnetic field generate each other in a radiating electromagnetic wave? This is something closer to Wicksell's endogenous money. Is NGDP = k M2 a long run equilibrium, where market fluctuations occur around it? This is closer to Milton Friedman's view.

As an anonymous commenter noted, the relationship between NGDP and M2 can break down (e.g. in Japan), which implies that NGDP = k M2 is not the real model, but instead an approximation to some other model. However, the reason we have M2 as opposed to just "M" is because the relationships between macro variables and the aggregates broke down over time [pdf]:
Although financial innovation has been an important factor, the evolution of the Federal Reserve Board staff's definitions of monetary aggregates primarily been governed by economists changing empirical perceptions of the appropriate concept of money.
That is, M2 is constructed to be a better indicator than M1 of some economic variable. That is to say, the definitions of the monetary aggregates have been changed over time in order to make them fit better to e.g. NGDP. Where does that leave us? We've defined M2 to match NGDP, and then we turn around and say that NGDP = k M2 is a model of the economy? This is a completely circular argument: our model uses NGDP to define M2, which we then use to show how well our model compares to NGDP.

That's why I would put more trust in a theoretical monetary model that looks at the monetary base or MZM, which have definitions independent of macro variables. The former is essentially printed currency, although it does contain reserves, which means the measure is "adjusted" for reserve requirements; if your model has some important place for e.g. excess reserves or interest on reserves, that should bring it under suspicion (are you sure the effect caused by excess reserves isn't used to define the adjusted monetary base?). The latter aggregate (MZM) is a pretty nice definition of money: it ostensibly includes anything that has effectively zero maturity, making its definition independent of macro variables (it seems to have a connection with long term interest rates in the information transfer model).

Now I didn't call for this in the original challenge, but I would like to take what a model should be a step further. Regardless of the problems listed above with NGDP = k M2, there is the additional question of what the model means. I could easily see a story where causality goes the other way -- the size of the economy (NGDP) dictates how much money is created by banks via fractional reserve banking (part of M2), so that the growth rates of both are fairly correlated. Is the story really that banks create money through fractional reserve banking, which causes the economy to grow? How does that work?

That's where the information transfer model comes in. First, empirically, it's just currency, not M2. What money does is allow people to move information around. Yes it's a medium of exchange, but it's also a unit of account -- the latter is better stated as a unit of information. When the Treasury prints new currency and the Fed releases it into circulation, more information coming from the aggregate demand (NGDP) can be "captured" by the new base money, causing the economy to grow. And the equation is really log NGDP ~ k log M0 where M0 is the currency in circulation (and k changes).
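The log-log form can be sketched with a toy fit (made-up numbers, not the FRED and Bank of Canada series used in the posts above): regress log NGDP on log M0 to estimate k.

```python
from math import log

# Toy data (not FRED): currency component M0 and NGDP, in trillions.
m0_series   = [0.5, 0.7, 0.9, 1.2]
ngdp_series = [4.1, 7.5, 11.8, 19.5]

xs = [log(m) for m in m0_series]
ys = [log(n) for n in ngdp_series]

# Least-squares slope of log NGDP against log M0.
xbar = sum(xs) / len(xs)
ybar = sum(ys) / len(ys)
num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
den = sum((x - xbar) ** 2 for x in xs)
k = num / den
print(k)  # the information transfer index for this toy data
```

In the actual model k isn't a fixed constant -- it changes slowly as the economy and the monetary base grow -- so a single regression like this is only a local approximation.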

How good is the price level function approximation?

In this post I defined the partition function approach and noted that the "ansatz" (i.e. fancy guess)

$$\text{(1) } P = p_{0} \frac{1}{\kappa (NGDP, M0)} \left( \frac{M0}{m_{0}}\right)^{1/\kappa (NGDP, M0) - 1}$$

seemed to be a pretty good approximation to

$$\text{(2) } P = p_{0} \langle a (m/m_{0})^{a - 1} \rangle$$

which I intend to explore further in this post. First, I needed to see how the expected value $NGDP \sim \langle m^{a} \rangle$ (in 100 random markets again) worked against the empirical data. In the following plot I show the equation (black) alongside the data (blue) in both log and linear scales:

This was a two parameter fit: an overall normalization of $NGDP$ and the relative normalization of $m$, so that

$$ NGDP = n_{0} \langle (m/m_{0})^{a} \rangle $$

This fit was then used in the price level ansatz equation (1) and compared with the numerical evaluation of the expectation value equation (2). What I am doing here is trying to figure out how well the functional form (1) approximates the "true" solution (2). It turns out it fits pretty well:

The ansatz (blue dashed), which was motivated through some squishy arguments [1], is a pretty good approximation to the exact solution (black), both of which fit pretty well to the data (green), again shown in log and linear scales. This means that the approach to macroeconomics taken on this blog has some pretty solid grounding.
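The numerical evaluation of the expectation value in equation (2) can be sketched directly (my own minimal version with made-up parameters and index distribution, not the fit shown in the plots): draw a random index a for each of 100 markets and average over the ensemble.

```python
import random

random.seed(42)  # reproducible ensemble

# Made-up normalizations and index range, for illustration only.
m0 = 1.0
p0 = 1.0
n_markets = 100
indices = [random.uniform(1.0, 2.0) for _ in range(n_markets)]

def price_level(m):
    """Equation (2): P = p0 * <a * (m/m0)^(a-1)> over the ensemble."""
    return p0 * sum(a * (m / m0) ** (a - 1) for a in indices) / n_markets

def ngdp(m):
    """NGDP ~ <(m/m0)^a> over the same ensemble (up to normalization n0)."""
    return sum((m / m0) ** a for a in indices) / n_markets

# The ensemble-averaged price level rises smoothly with the money supply.
print(price_level(1.0), price_level(2.0))
```

The ansatz (1) is then the claim that this ensemble average is well approximated by a single effective index $\kappa(NGDP, M0)$, which is what the comparison in the plot above checks.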

[1] Equation (1) uses the definition of the information transfer index as counting the number of symbols and posits that the number of symbols in the demand is proportional to NGDP, while the number of symbols in the supply is proportional to the money supply. Additionally, there is an assumption that changes in the information transfer index are slow (compared to changes in the size of the economy or the money supply) so it can be taken out of the integral.