One thing to keep in mind when using mathematics to describe physical reality is that you have to be very careful about taking limits. In pure mathematics it's generally acceptable to send a variable off to infinity, *m → ∞*. If you are using mathematics to describe physical reality, then *m* might just have 'dimensions' (aka 'units'), so sending a dimension**ful** number off to dimension**less** infinity (or zero) can give you weird results.
Generally, if, say, *m* is a mass (with units of kilograms), the only way you can send it off to infinity or zero is to have another scale (say, another mass *M* with units of kilograms) to compare it to. You can send *m/M → ∞* or *m/M → 0* ... or better yet, as physicists tend to put it: *m/M* >> 1 or *m/M* << 1 (i.e. *m* >> *M* or *m* << *M*).
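To make the point concrete, here's a minimal sketch in Python (my own toy illustration, not a real units library): only the ratio of two quantities with the same units produces a pure number that can sensibly be compared to 1.

```python
# Toy illustration (my own, not a real units library): a quantity carries a
# unit, and only ratios of like units yield a dimensionless pure number.

class Quantity:
    def __init__(self, value, unit):
        self.value = value
        self.unit = unit  # e.g. "kg"; the empty string means dimensionless

    def __truediv__(self, other):
        if self.unit != other.unit:
            raise ValueError("cannot compare %s to %s" % (self.unit, other.unit))
        return Quantity(self.value / other.value, "")  # units cancel

    @property
    def dimensionless(self):
        return self.unit == ""

m = Quantity(5.0, "kg")
M = Quantity(0.5, "kg")

ratio = m / M  # a pure number; *this* can meaningfully be >> 1 or << 1
print(ratio.dimensionless, ratio.value)  # True 10.0

# "m -> infinity" on its own is meaningless: infinity kilograms relative
# to what? Only m/M >> 1 (or m/M << 1) makes physical sense.
```

The `ValueError` is the whole point: even asking whether a mass is "large" requires another mass to divide by.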
I do my best to be very careful about this (I probably have made some mistakes). However, I don't think economists care about this *at all*. For example, Paul Romer and Robert Lucas both take limits where time *T* and a time scale (*1/β*, where *β* is a rate) go off to infinity simultaneously. In pure math, this produces an issue of almost uniform convergence; however, if you're using math to describe physical reality then it is nonsense to send these two dimensionful scales off to dimensionless infinity simultaneously. An econ student (possibly, since it was EJR) also had no idea about this, which makes me think this attitude is very widespread. The only sensible thing you can do is look at *T β* >> 1 or *T β* << 1. Other limits are **nonsense**.
Nick Rowe doesn't seem to have a problem with sending dimensionful numbers to dimensionless infinity either (which I wrote about yesterday), sending time steps *dT* to zero when he should really be looking at the relationship to the other physical scale in his theory: the delay in the onset of the end of fiscal stimulus *dt*. This carelessness creates nonsense results.
Noah Smith, John Cochrane and Michael Woodford (see Noah for one-stop shopping, and see slide 30 of Woodford's presentation [pdf] [1]) all make this same error when talking about neo-Fisherism in terms of permanent rate pegs and expectations infinitely far in the future. And if Woodford is considered the Ed Witten of economics (per Noah in the link above), that doesn't bode well for *any* economist knowing how to use math to describe reality. It also makes me think the whole neo-Fisherite view may just be an artefact of poor dimensional analysis (something that I will be looking into ...)
The problem is that economists don't see this as an issue. And they don't seem to take kindly to physicists trying to tell them what to do. When Chris House says "[Physicists'] mathematical abilities are actually not that much better than most economists (if they are better at all)" my spidey sense starts suggesting the reason is probably Dunning-Kruger [2]. The second paper on his list on his website ("Layoffs, Lemons and Temps") has individual firms with "production function[s] with the usual properties", which in the footnote contains two nonsense limits where the number of workers *n* goes to zero and infinity ... treated as if it's an everyday assumption (even relegated to a footnote). They are in fact everyday assumptions in economics (called the Inada conditions)! The only sensible limits in that case would be (for instance) *n/N* >> 1 or *n/N* << 1, where *N* is the number of firms (in English, a few firms have a large number of workers versus many firms have a small number of workers). Another way would be to compare to the population size (does everyone work for a few companies, *n/P* ~ 1, or do very few people work for the same company, *n/P* << 1). I don't think this impacts the results of House's paper, but it is careless mathematics. At least the Inada conditions are described in terms of a pure mathematical function with dimensionless inputs.

**Update 11/9/2015:**

Even if the reason for the finite number of belief updates in footnote [1] is that they cost some amount of money *dm* (or people are just *n*-smart), you still can't send dimensionful time *T* to dimensionless infinity when things happen in your model that take a finite amount of time (or have some finite timescale, such as the decaying functions on Woodford's slide 26). The version where revisions take a finite time *dt* is just one possible way to make the limit make sense -- not necessarily the only way. It's sort of like the example above with Chris House's paper where I used *N* and *P*. The issue is that there has to be **some scale** that the number of workers *n* is large compared to, be it the number of firms or the total population, and there has to be some timescale that time *T* is long compared to.

**Footnotes:**

[1] Woodford takes two limits of *n* → ∞ and *T* → ∞, where *n* is the number of 'belief revisions'; these revisions obviously take some finite time *dt* (or else you could do an infinite number of revisions instantaneously), so the only sensible (in the sense of using math to describe reality) limits are *n dt/T* >> 1 or *n dt/T* << 1. The first says that belief revisions take longer than the time horizon of the interest rate peg (interest rates stop being pegged before you fully revise your beliefs -- which doesn't seem like a very long peg); the second says you revise your beliefs to a high order before the interest rate peg ends (which actually makes more sense). The limits that Woodford takes (*n dt* → ∞ and *T* → ∞ simultaneously, in both orders) don't make any sense.
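Restating the footnote's argument in symbols (my notation, not Woodford's):

```latex
% The only dimensionless combination of the scales n, dt and T is their
% ratio, so the only well-defined limits are stated in terms of it:
\[
  x \equiv \frac{n\,dt}{T}, \qquad
  \begin{cases}
    x \gg 1 & \text{belief revision takes longer than the peg horizon,} \\
    x \ll 1 & \text{beliefs are revised to high order before the peg ends.}
  \end{cases}
\]
% Taking n \to \infty and T \to \infty separately (in either order) fixes
% no value of x, which is why those limits are ill-defined for a
% dimensionful T.
```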
[2] I've sort of come through the rabbit hole on this one. In my first forays into econ, I basically thought economists were just fine with math. I tended to defend econ from usurping physicists (here and here). But these limit problems (in a paper by preeminent economist Michael Woodford, no less), coupled with Paul Romer's diatribe against mathiness and Nick Rowe on the RCK model, make me think that maybe economists don't really know what they are talking about.

"my spidey sense starts suggesting the reason is probably Dunning-Kruger [2]"

That's the first time I've heard of "spidey sense" but far from the 1st time I've encountered Dunning-Kruger. Come to think of it, that might make a good blog handle for myself: Dunning Kruger, or just "D. Kruger" for those in the know... Lol.

It's these kind of posts that are really fun to read Jason... you just don't see this kind of thing elsewhere (that I'm aware of) on macro blogs, and I would have never thought of this limit problem myself.

"It's these kind of posts that are really fun to read Jason"

Agreed!

Thanks Tom and Todd.

Woodford doesn't have a blog does he? I know JP Koning emailed him once "on a whim" with a question and was surprised to get a response. I wonder how he might respond to this...

I don't know. I might try Paul Romer first since he's on Twitter.

"It also makes me think the whole neo-Fisherite view may just be an artefact of poor dimensional analysis (something that I will be looking into ... )"

Jason,

A few weeks ago I posted some comments (see below) regarding one of John Cochrane's papers on his blog. I wonder if you would comment? The paper can be found at:

http://faculty.chicagobooth.edu/john.cochrane/research/papers/cochrane_policy.pdf

Henry

John,

I have been studying your paper “Monetary Policy with Interest on Reserves”. I am wondering if you could clarify some issues for me.

The first equation you present says essentially (dispensing with the summation operator for clarity):

Nominal value of bonds/price deflator = expectational operator X real discount rate X future surpluses

To me this is a tautology, almost an accounting relationship, and not a functional relationship at all.

You then switch price deflators around and end up with:

Nom. Value of bonds/price deflator X expect op. X inflation index = expect op. X real dis. rate X future surpluses

To me this is still a tautology with some manipulations added.

You then immediately conclude that:

-unexpected inflation is determined entirely by expectations of future surpluses

-govt. can determine expected inflation by nominal bonds sales.

To me, this is like saying the following:

Weight of box of apples = no. of apples X unit weight (i.e. a tautology)

Weight of box apples X price index = no. of apples X unit weight X price index

And then conclude:

Price index is a function of the no. of apples (just because these variables are on opposite sides of an equality sign) which of course is absurd.

I’m afraid this does not make sense to me.

Can you clarify?

Henry


John H. Cochrane, September 2, 2015 at 8:49 PM

Is price = expected present value of dividends a tautology? It's the same equation. Of course, if you discount at the ex-post rate of return it is a tautology, so in some sense both equations require you to be specific about discount rates to be useful. But price = expected present value of dividends seems to be very useful, and, again, it's the same equation.

Anonymous, September 2, 2015 at 10:20 PM

I can begin to see that adding expectational operators appears to turn the equation into a functional relationship, although I'm not quite convinced or can't quite see it. If you took your price = exp'd pv of divs equation and multiplied both sides by the price of fish (to preserve the equality), then you could say the price of fish is a function of the exp'd pv of divs. That's what it seems you are doing with your first few equations in your paper.

Going back to apples, if you said:

Weight of box = no. of apples X unit weight

and

unit weight = funct(rainfall, sunlight, fertilizer)

then

weight of box = no. of apples X funct(rainfall, sunlight, fertilizer)

This is clearly a functional relationship. It bears a hypothesis which can be tested.

It seems to me the price = e'd pv of divs isn't quite the same sort of relationship and heading for the realm of tautology in which case nothing new is being said.

Henry

Hello Henry,

I think you have a great way of illustrating the expected value in the asset pricing equation.

weight of box = no. of apples × expected_weight(rainfall, sunlight, fertilizer)

Cochrane is just using the asset pricing equation to do the same thing -- when he says "price = expected present value of dividends", he is saying (in your analogy):

market_cap = no. of shares × expected_stock_price(business conditions, sales, costs)

There is a bit of future discounting (so present value) going on so we say:

market_cap = no. of shares × discount_factor × expected_stock_price(business conditions, sales, costs)

And then if you divide by the number of shares (market cap/number of shares = stock price), you get:

stock_price = discount_factor × expected_stock_price(business conditions, sales, costs)

The discount factor is usually considered to be based on expectations so we say:

stock_price = expected_stock_price(discount_factor, business conditions, sales, costs)

The right hand side is just the expected present value of dividends.

In general the asset pricing equation relates two prices at different times, so unless nothing happens in the economy, these will always be different.
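To see the "expected present value of dividends" arithmetic with concrete numbers (my own invented numbers, not anything from Cochrane's paper), a quick sketch:

```python
# Toy sketch with made-up numbers: price = sum_t beta^t * E[dividend_t].
# 'beta' here is a dimensionless per-period discount factor -- the kind of
# implicit time scale discussed in the post.

def present_value(expected_dividends, beta):
    """Discounted sum of expected dividends over future periods t = 1, 2, ..."""
    return sum(beta ** t * d for t, d in enumerate(expected_dividends, start=1))

expected_dividends = [2.0, 2.0, 2.0]  # expected dividend in each future period
beta = 0.5                            # discount factor per period

price = present_value(expected_dividends, beta)
print(price)  # 0.5*2 + 0.25*2 + 0.125*2 = 1.75
```

Nothing here is a tautology: the right hand side is a hypothesis about expectations and discounting, and changing either changes the price.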

Also, I wrote up a different take on the asset pricing equation here:

http://informationtransfereconomics.blogspot.com/2015/05/the-basic-asset-pricing-equation-as.html

Jason,

I was wondering what you thought of Cochrane's apparent use of what I call a tautology, then adding to both sides of the equation a price and then drawing a functional relationship between the introduced factor and a variable on the other side. Seems very odd to me and a fragile way to launch an argument.

I was trying to show it's not any more of a tautology than:

weight of box = no. of apples × expected_weight(rainfall, sunlight, fertilizer)

He also derives it in section 8.1. The first equation in the paper is the equation immediately before where it appears in section 8.1 (bottom of page 41) with c = y and the last term set to zero by a transversality condition.

Whatever the issues with transversality conditions, they are not tautologies. Therefore Cochrane's equation is not a tautology, but rather some kind of model.

Thanks Jason, I'll have a read of your transversality conditions post.

As I mentioned in my second post to Cochrane, I could concede that introducing expectations into the equation adds a functionality. Let's say we put aside the question of tautology; is there still not an issue with the idea of multiplying both sides of the equation by a price index and then arguing that the price index is a function of some other variable on the RHS of the equation?

Sorry Jason, working from memory, I've stuffed up. What Cochrane does is introduce a price index on the LHS and juggles it so that the LHS remains mathematically unchanged. Then he asserts there is a relationship between the price index and the RHS.

I'm not sure I understand.

Equations (2) and (3) in Cochrane's paper add up to give you equation (1), i.e. (1) = (2) + (3), where (2) is the 'expected' part and (3) is the 'unexpected' part. There's no multiplication by a price index -- there's a multiplication by 1, i.e.

1 = Et (Pt-1)/(Pt-1)

The expected ratio of the price level at time t-1 to the price level at time t-1 -- at time t. Pt-1 is known at time t, so this is just 1. So if you multiply the LHS of Cochrane's eq (1) by that factor you get:

(Bt-1)/(Pt) Et (Pt-1)/(Pt-1) = (Bt-1)/(Pt-1) Et (Pt-1)/(Pt)

because you can pass the Pt (price level at time t) through the expectations operator (we know the expected price level at time t at time t).

“I'm not sure I understand.”

Jason,

I apologize. This is entirely my fault as I have not explained myself properly.

Before I proceed: I read and attempted to understand your blog on transversality conditions. I mostly failed; however, can the TCs be described as those conditions which permit paths to be taken which are not permitted by the parameters of the formal model? Apart from that, I can’t see what TCs have to do with tautologies.

As far as I can see (still happy to be persuaded), Cochrane’s equation (1) is akin to an accounting relationship and is not a functional relationship (that is to say, one where a variable X is the cause of variable Y). All it says is, Y looks like this from this perspective, as far as I am concerned. It’s like saying this box is made of wood, but it does not explain the process of manufacture. If it did explain the process of manufacture it would explain how the individual wood components were shaped, how they were fastened together and with what; then you might be able to say something interesting, like: this is a box because it has integrity, it has integrity because of the fastening system, and because it has integrity it can be used for containing other items of weight. Anyway, rabbitting on too much.

Cochrane’s exegesis begins with equ. (1) then jumps to (2) without showing the intervening steps, viz.:

(Bt-1/Pt)(Pt-1/Pt-1) = RHS

Which leaves the LHS effectively unchanged as Pt-1/Pt-1 = 1.

He then switches the LHS around a bit to:

(Bt-1/Pt-1) (Pt-1/Pt)

Leaving the LHS effectively unchanged.

Then he separates the expected and unexpected parts of (1). Now you say in your post that (2) is the expected part and the unexpected part is (3). I presume (2) is such because it has the Et operator and (3) has the Et-1 operator, which refers to a time in the past (if this is correct, why have an expectational operator?). (BTW, I can see (1) = (2) + (3).)

Next, he then goes on immediately to conclude that equ. (2) shows us:

“Unexpected inflation is determined by expectation of future surpluses.”

Looking at the statement itself, it seems to be absurd enough.

However, the unexpected inflation term is this part of equ (2) I presume (?):

(Et – Et-1)(Pt-1/Pt)

(The price ratio part is the price index I was talking about in my earlier post.)

And the expected future surplus is the RHS of (2).

So he begins with a non-functional relationship (as far as I am concerned), multiplies the LHS effectively by 1, jiggles the price terms around, in the process generating a price index (i.e. a measure of inflation) and then concludes that the inflation is a function of future surpluses.

I find this all absurd.

Then he goes on to make what I believe is an even more absurd conclusion, i.e. that expected inflation can be determined by bond sales while leaving surpluses unchanged. He is in effect taking the LHS of (3) (I presume) and saying: if I increase the bond value term I can reduce inflation by the same percentage, keeping the LHS unchanged and still equal to an unchanged RHS.

To me this is all nonsense.

Perhaps you can disabuse me of this view?

BTW, I tried to print out your Information Equilibrium paper. It seems to get stuck at page 27 for some reason. Can you check this? Thanks.

Henry

Jason,

Using your dimensional analysis approach would it go like this for equ. (1)?

LHS = $/($/G) where G is a good or goods

=> LHS = $ X G/$

For the RHS, firstly:

Beta = 1/(1-r)

r = $/$

=> Beta = $/($-$)

So:

RHS = $/($-$) X $

So:

LHS not= RHS

Some quick answers:

Regarding the paper: pages 27 and 28 have some complex graphics that take a while for printers to process. It takes a while on my printer too.

Regarding the units of beta: it should be dimensionless (it is a ratio of the utility at one time to utility at another), but r is a rate (units of 1/time). However, in Cochrane's presentation time units are all relative to the base time unit. So the rate r is over 2 years and each time step is 2 years. It's kind of a mess, really. Overall, it does work out, though.

I will look more carefully at your comments and get back later.

Jason,

Just to clarify what might be a minor confusion in my re-explanation post (although I'm sure you will see thru my slip), when I said:

(Bt-1/Pt)(Pt-1/Pt-1) = RHS

I didn't mean that (Bt-1/Pt)(Pt-1/Pt-1) is the RHS. I was just being lazy as I couldn't be bothered writing out the whole of the RHS. Anyway, I probably should have written for consistency with the rest:

LHS = (Bt-1/Pt)(Pt-1/Pt-1)

Jason,

You're probably sick of my posts but just so I can outline where I was going with my post to John Cochrane, I'll repeat here again hopefully with more clarity.

Cochrane's equ (1) is analogous to the following:

weight of box of apples = sum of weights of individual box of apples.

Multiplying the LHS by a ratio = 1:

weight of box of apples X price of fish/price of fish = sum of weights etc.

Rejigging the ratios while leaving the LHS unchanged:

weight of box/price of fish X price of fish = sum of weight of apples

Then draw the conclusion that:

the price of fish is a function of the sum of the weight of individual apples.

To me this is what Cochrane has done with his first few equations and conclusions. Which to me is patently absurd.

H.

still stuffing up, first equation should read:

weight of box of apples = sum of weights of individual apples in box

One thing that surprised me reading economics blogs is that, despite the reliance of economists on math, the ones I have read do not seem to think like mathematicians, even if they can do the math they use. My impression is not that they are careless about taking limits, it is that they do not take limits at all, but simply assume that we have reached Neverland. I think that Jaynes got it right, at least for scientific purposes. We should build finite models and take them to the limit.

Mathematicians have not always been aware of the problems with jumping to the limit. My guess is that economists think as they do because they have adopted a 19th century viewpoint, by which jumping to the limit was not considered problematical. We have a succession of economics professors teaching what they were taught, going back to the 19th century.

However, economics didn't adopt a lot of math until the middle of the 20th century (there was some before that, notably Fisher's 1892 thesis), well after the mathematical issues involved in limits and infinitesimals had been largely resolved.

So I'm not sure what gives ...

I just read through a couple of papers that immediately start out with summing time periods to infinity of a term containing a discount factor. Of course that only makes sense if there is a scale for the discount factor. Gonna post on this ...

OK, so there's not the laying on of hands. ;)

I'm not sure what gives, either. I do know that psychologists still did experiments with memorizing nonsense syllables in the mid-20th century, harkening back to a 19th century mindset. And a lot of economics smacks of the 19th century to me.

I wouldn't really disagree about that :)

Does this really break down any important model?

I'm not sure how to interpret the notion of a "scale". In the case of time, it seems intuitively relevant to include it (especially in the case of Romer's model of knowledge arrival). But numbers in economics generally don't measure anything, or rather, they describe a number of discrete given entities (e.g. the number of firms in a Cournot competition problem). Is a measuring rod necessary in this case?

It's very strange that a problem like this went unnoticed. There's certainly a disconnect between mathematical physics and mathematical economics (you already know this), but at least one very renowned economist supposedly had some contact with physics (Samuelson), and he's partly responsible for the introduction of mathematics in economics. Perhaps economists didn't think it was relevant in the 1950s and 1960s, and the younger crowd forgot it.

These problems can sometimes be fixed without a lot of trouble, so maybe it is just me being nit-picky. It does kill Paul Romer's mathiness issue with Lucas and Moll, though (since the double limit doesn't make sense). I am still looking for a well-known case where it is irreparable in a real result. In this case, I think it helps make *sense* of what is going on ... make it more intuitive:

http://informationtransfereconomics.blogspot.com/2015/11/temporal-shapes-of-discount-factors-and.html

I think the cause was a shift to models of time periods t = 1, 2, 3 ... which implicitly eliminates the time scale (scale means kind of the same thing as units -- meter is a unit and 1 meter is a scale). It's a change of units where your relevant time scale is now equal to whatever time Δt is. If Δt is one quarter or one year, then the time scale is one quarter or one year. But then it doesn't make sense to take Δt to zero unless you re-introduce a time scale (as I do in the link above).
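A quick numerical sketch of that re-introduction (my own example, not from the link): write the per-period discount factor as β = exp(−r Δt) with an explicit rate r (units of 1/time); then the discounted sum over a fixed horizon T has a well-defined Δt → 0 limit, namely the integral (1 − exp(−r T))/r.

```python
import math

# Sketch (my own example): once the discount factor carries an explicit rate
# r with units 1/time, i.e. beta = exp(-r*dt), the time step dt can be sent
# to zero and the discounted sum converges to the continuum integral.

def discounted_sum(r, T, dt):
    """Right-hand Riemann sum of exp(-r*t) over steps of size dt up to T."""
    n = int(round(T / dt))
    return sum(math.exp(-r * k * dt) * dt for k in range(1, n + 1))

r, T = 1.0, 10.0                          # rate (1/years), horizon (years)
continuum = (1.0 - math.exp(-r * T)) / r  # integral of exp(-r*t) from 0 to T

for dt in (1.0, 0.1, 0.01):
    print(dt, discounted_sum(r, T, dt))   # approaches `continuum` as dt -> 0
```

Without the rate r there is nothing for β to hold onto as Δt → 0, which is the dimensional problem in a nutshell.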

And I agree that a lot of unitless things (dimensionless pure numbers of kinds of widgets or firms) don't always need a scale, especially when they represent numbers of dimensions (e.g. a 2-good Edgeworth box can be taken to be an n-good box with n >> 1 without issue).

I should add that if your infinite sums over time have some implicit scale Δt, then you can't willy-nilly add new effects to a model that have their own scales that aren't order 1 multiples of the "fundamental" scale Δt. Nick Rowe's version adds in a new time scale for government action in an NK model that is ~ Δt, but then says expectations are formed dt << Δt (since Δt is the only scale in the NK model he shows, expectations must form on a scale ~ Δt also).
