As the comment thread on the previous post has gotten a bit out of hand, I thought I'd put my mind to distilling the essence of the argument I was making. I managed to come up with a pretty nice illustration of what is going on.
First, I am not saying SFC is the only tool of Post Keynesian economics. However, it seems to come up a lot. Just sayin'. Among their weaponry are such things as ...
Second, I am not saying this is an insurmountable issue. However, it does mean that stock-flow consistent (SFC) models have implicit assumptions. And the solution is to state those assumptions. But from what I gather, stating those assumptions will go against the "accounting identity" philosophy of SFC models.
Basically, the point is that equation (1) below doesn't pin down the path of $H$ unless the time step $\Delta t$ is considered to be the smallest possible time step.
$$
\text{(1) }\;\;\; \Delta H = G - T
$$
The left hand side (LHS) is a constraint on two points one time step apart. The RHS is effectively a rate integrated over time. The problem is that the constraint on the LHS is insufficient to specify the function on the RHS. There are actually an infinite number of paths (at shorter time steps) that are consistent with $\Delta H$. Here's a picture:
Let's say $G - T$ is the blue line. Well, that is just one way to realize a path that has a change in $H$ equal to $\Delta H$. These other paths (the red lines) violate the "accounting identity" view of equation (1), but are actually consistent with equation (1). This is related to the fundamental theorem of calculus (and in higher dimensions, Stokes' theorem): the integral of any function whose antiderivative passes through those two points is the same.
The curvature degree of freedom used by the red lines is the "time scale" I referenced in the previous post. There has to be some scale for an observable function to have a non-trivial dependence on time.
Basically, accounting doesn't specify the path since many functions of time will have the same endpoints. This should be obvious: my bank balance last year was €50, my bank balance this year is €100. Did I spend just €50? Maybe I made €1000 and lost €950. There are actually an infinite number of possible paths that satisfy these endpoints.
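Here's a minimal numerical sketch of that degeneracy (my own toy illustration; the specific flows are made up): pin the endpoints, generate random monthly flow paths, and note that every one of them satisfies the same $\Delta H$.

```python
import numpy as np

rng = np.random.default_rng(0)

start, end = 50.0, 100.0  # last year's and this year's balance
n_sub = 12                # monthly sub-steps inside the one-year step

# Three random monthly flow paths, each shifted so that the flows sum
# to end - start: every path is consistent with the same Delta H.
for trial in range(3):
    flows = rng.normal(0.0, 100.0, n_sub)
    flows += (end - start - flows.sum()) / n_sub
    path = np.concatenate(([start], start + np.cumsum(flows)))
    print(path.round(1))  # wildly different paths, all ending at 100.0
```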
The way to fix this is either to 1) define $H$ in terms of $G - T$, in which case it's not accounting, it's a definition (there is no independent thing called "high powered money", $H$), or 2) say the instantaneous rates of change of $H$ and of the integral of $G - T$ are equal, i.e. $dH/dt = G - T$ (this will relate to the information equilibrium model in the next post). There is a third way I'll describe below.
There's another fun analogy. Let's take $H$ to be displacement $S$. We define $G - T = \gamma \Delta t - \xi \Delta t \equiv \beta \Delta t$ with $\beta = \gamma - \xi$. This $\beta$ is velocity. Our equation (1) is then:
$$
\Delta S = v \Delta t
$$
This is true ... if velocity is constant. However, if velocity is a linear function of time (i.e. constant acceleration), we have
$$
\Delta S = \frac{1}{2} a \Delta t^{2} + v_{0} \Delta t
$$
The accounting identity view depends on $v$ (and thus $\beta$) being constant, but they're not. In general, we have
$$
\Delta S = \int_{t}^{t + \Delta t} dt' \; v(t')
$$
Or for the SFC model system:
$$
\Delta H = \int_{t}^{t + \Delta t} dt' \; \left[ \gamma(t') - \xi(t') \right]
$$
Only if $\gamma$ and $\xi$ are constant do we get
$$
\Delta H = \gamma \Delta t - \xi \Delta t
$$
That gives us a third way to specify the implicit model assumption: that the rate of change of $G - T$ is constant. This isn't true in general, or even in the way the model works out numerically in the previous post. But it's a way you can fix the SFC framework.
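To make the point concrete, here's a toy numerical check (the specific rate functions are made up for illustration): two different paths for $\gamma(t) - \xi(t)$ with the same average give the same $\Delta H$, while the constant-rate formula only matches one of them.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)  # one accounting period, Delta t = 1

xi = lambda s: 10.0 + 0.0 * s                   # tax rate xi(t)
gamma_const = lambda s: 20.0 + 0.0 * s          # constant spending rate
gamma_vary = lambda s: 20.0 + 30.0 * (s - 0.5)  # same average, non-constant

for gamma in (gamma_const, gamma_vary):
    dH = np.trapz(gamma(t) - xi(t), t)        # Delta H = integral of (gamma - xi)
    endpoint = (gamma(0.0) - xi(0.0)) * 1.0   # constant-rate formula at t = 0
    print(f"integral: {dH:.2f}   constant-rate formula: {endpoint:.2f}")

# Both paths give Delta H = 10, but the constant-rate formula
# (gamma - xi)*dt only matches when the rates really are constant.
```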
This is clearer. Thanks.
In other words:
1) Jason Smith has discovered that financial accounting (not just SFC, by the way) involves discrete time series.
2) This is only a problem for mathematicians, not well versed in financial accounting, who think the financial world can be modeled in a continuous time series (which it can't).
End of story.
You are still missing the point.
DeleteThe "accounting identity" $\Delta H = G - T$ has a degree of freedom (a time scale) that isn't set by "accounting". It's not because accounting "involves" discrete time series. It's because the equation constrains the end points, not the path. It doesn't matter if that path is discrete or continuous.
This is logically independent of discretization. The same thing is shown both in continuous time (last half of post) and discrete time (first half).
Your misunderstandings are even more basic than I originally thought. You need to take a basic accounting course.
The whole point I've been making all along is that these implicit assumptions are well known by the modelers. They are, as Tobin described 35 years ago, "unrealistic abstractions". This is how accounting is done. And it doesn't mean it's wrong or idiotic as you would say. You just have to understand how it's being done, which, as you have now clearly proven, you don't.
For instance, in your example above, if you specify the discrete time period and the source of flows during that time period then the flows tell the story about "where" and "when" your stock of extra €50 came from. That is the whole point! There's nothing inconsistent about it unless you do the accounting incorrectly.
And let's be clear - this isn't about SFC. This is a failure to understand accounting 101.
Accounting 101 is apparently unable to handle a scenario with changing flows consistent with basic calculus.
Is there any calculus in accounting? Besides the derivation of the continuously compounded interest formula?
Sorry, my first summary was wrong. Should have been:
According to a physicist who writes a blog about an untested economic theory, the entire field of accounting is wrong. EOM.
It's not wrong, it just doesn't pin down the behavior as much as you think it does.
This didn't do anything for you:
This should be obvious: my bank balance last year was €50, my bank balance this year is €100. Did I spend just €50? Maybe I made €1000 and lost €950. There are actually an infinite number of possible paths that satisfy these endpoints.
No one ever said accounting was magic.
And yes, when you construct an example where you don't explain the flow of the €50 then it "doesn't do anything for" anyone. The difference between your example and the real world is that we actually know where the flows come from. And that's the whole point of "accounting", a word which means:
"the systematic and comprehensive recording of financial transactions"
When you intentionally leave out the recording of transactions you're performing voodoo, not accounting.
"Only if γ and ξ are constant, ..."
Nice shift.
So is ΔH = G - T or not?
What shift?
$\Delta H = G - T$ doesn't have an extra degree of freedom (an implicit scale) if your time step is the smallest possible time step.
So you have a choice:
1. Make the implicit time scale explicit (e.g. adding a factor out front)
2. Assume your time step is the smallest increment of time
Of course, 2 is equivalent to 1 -- the smallest increment of time becomes your explicit time scale.
The shift was the shift in your views.
Delete"Make the implicit time scale explicit (e.g. adding a factor out front)"
It is always written in SFC models that the time periods are such and such. There's no factor out front. Keep making accounting errors!
"As the comment thread on the previous post has gotten a bit out of hand,"
You can probably look at choosing to keep comments unthreaded in the blog options. It's easy for everyone to keep track of what's happening.
Meanwhile some of us are still wondering why this issue, or SFC in general is so important.
Maybe I'm too cynical...
This isn't about SFC. Jason is actually saying that all financial accounting is essentially wrong. And if the readers here think that accounting is a misguided way to understand finance and economics then you should tell Wall Street, Central Banks and every other entity in the world about how they've been doing everything wrong for the last 500 years.
Fine, let me rephrase:
Jason, why the heck are you still talking about PKE?
Is that direct enough for you?
This has nothing to do with PKE. He's not actually talking about PKE or SFC. He's talking about accounting 101. He's saying that accounting is wrong. He's actually saying that one of the oldest time tested and most commonly used financial modeling tools in the history of humans is wrong. Ha.
It's called March Madness ... and this year's topic is SFC and PKE! I'm still writing that post about how this can be described as an information equilibrium model ...
SamWatts:
Calculus was invented under 400 years ago, so if the relevant accounting is 500 years old ...
FWIW: It's not that accounting is wrong. It's that accounting doesn't constrain an SFC model as much as you think it does.
Double entry bookkeeping has been around for over 500 years. Maybe one of these days you'll look into how it works. :-)
Delete"This year's topic is SFC and PKE!"
A whole year?! I was hoping this dismal endeavour would end in a couple of weeks...
Otherwise, I don't care if anyone thinks this is about accounting. As long as the post has SFC in the title, it's about PKE.
As I have now said three times: this isn't saying accounting is wrong, it just doesn't specify as much as you think it does.
E.g.
This should be obvious: my bank balance last year was €50, my bank balance this year is €100. Did I spend just €50? Maybe I made €1000 and lost €950. There are actually an infinite number of possible paths that satisfy these endpoints.
It's just this year's "March Madness" (TM).
It seems accounting is just a rotation matrix.
(That sound you just heard was SamWatts's head exploding.)
It looks like the people criticizing SFC and PKE don't actually know what either of them are. So let's give this a try since you guys are still taking baby steps on this stuff:
1) SFC ≠ PKE. Stock flow consistent models were created by Morris Copeland in the 1940s and are used by Central Banks and financial institutions everywhere. This had nothing to do with PKE.
2) SFC is just a model that coherently integrates all stocks and flows of an economy. When someone says SFC is wrong they're actually saying that basic accounting constructs are wrong. In other words, they're saying that balance sheets and income statements are wrong. (This is worthy of an actual LOL, by the way).
3) PKE uses SFC models, but PKE did not invent SFC models. After all, SFC models are just basic stock-flow constructs utilized by most financial and business entities within the economy.
Are we beginning to get the basic facts straight now?
Oh, thank goodness. It will be over in three short weeks...
Delete"Double entry bookkeeping has been around for over 500 years. Maybe one of these days you'll look into how it works. :-)"
Truer words were never spoken here.
But more important than accounting, there are also basic mathematical errors made: errors due to confusion in going back and forth between difference equations and differential equations.
Ah, but Ramanan, I demonstrated here that with G&L's SIM, changing sample periods from T1 to T2 is NOT accomplished by scaling alpha2 by T2/T1, as you thought (T1 not explicitly identified in G&L as such, but clearly it's equal to 1 "period" from examining their results table). The correct answer has precisely to do with properly moving between discrete time and continuous time and vice versa.
Sorry Tom, didn't catch your point.
You probably didn't understand my point.
I choose a period such as a quarter. I set alpha2 to 0.1.
If instead I choose a year as the time period, then I have to change alpha2 to 0.4.
The G&L simulations work well with this.
But I demonstrate on my previous spreadsheet (link at the top of the page I link to in my comment above) that doing that scaling you mention does not work. It gives the wrong answer for H. I explain the proper way to convert on that page I link to above (in the blog text beneath the embedded spreadsheet). I'd repeat it here, but I'm on my phone. Check out the link I give above and it tells you precisely where scaling alpha2, as you recommend, goes wrong.
So to summarize, I have two embedded spreadsheets implementing SIM, both accessible through that link above: the link is to the 2nd one which adjusts for changes in sample period properly, and describes what I'm doing below the sheet. The 1st spreadsheet (available through a link at the top of the 2nd) tries it your way and demonstrates the problem (incorrect values for H) if alpha2 is scaled like you describe.
The reason is clear once you put the problem in the standard state space model form, which I do. So in other words the reason is clear from the mathematical expressions I provide, not just from trying it out on the interactive spreadsheets. Give it a look.
Tom, the problem with your spreadsheets is that you don't adjust G to changes in t. When you do that it works perfectly.
Tom,
Yet to look at your spreadsheet.
I don't get why you have to go through all this in such a complicated way rather than simply solving it.
I think your confusions are the same as Jason's.
I mean there's no checking by calculation.
There's just a check by conceptualising.
Define the time period to be one quarter. Let's say the time lag is 16. That's 4 years.
If instead the time period is one year, alpha2 needs to be adjusted and you have a mean lag of 4, which is 4 years.
I mean I do not understand what you are trying to achieve.
Moreover, the results can be presented in analytic form, so if there's a mistake, it's in your spreadsheet.
A H, the *curvature* of the adjustment is still dependent on the time scale.
A H, no it doesn't. Try it out for yourself. If you scale both G and alpha2 by T2/T1 you'll note that you get the wrong answers. This is easy to see in that the output measurements (Y,T,YD,C) are also dependent on alpha2. Right now you have to download it to do that because I didn't provide an easy way to change G except in the 1st few cells.
Tom,
Ramanan and SamWatts aren't interested in engaging with this. They think it is accounting.
SamWatts's comment about where the 50 Euro came from is evidence of not grasping the concept.
The "integral" of the constant 50 over a time step of 1 is 50. So is the integral of 950 - 900. The equation delta H = G - T doesn't specify.
The fact that Ramanan is using a rhetorical attack using a mistake I admitted (and wasn't relevant) makes it clear: this is just political garbage.
I'm with Noah now. PKE is drivel.
Ramanan, I challenge you to demonstrate what you're saying. Try it for yourself. I can even show you how in steady state I'm correct analytically. The reason I went through that "complication" is precisely to avoid the error you make. SIM can be reduced to:
System or dynamic equation:
H[n+1] = A1*H[n] + B1*G[n+1]
Measurement equations:
Y[n+1] = CY*H[n] + DY*G[n+1]
T[n+1] = CT*H[n] + DT*G[n+1]
YD[n+1] = CYD*H[n] + DYD*G[n+1]
C[n+1] = CC*H[n] + DC*G[n+1]
Where
A1 = 1 - theta*alpha2/(1 - alpha1 + theta*alpha1)
B1 = 1 - theta/(1 - alpha1 + theta*alpha1)
CY = alpha2/(1-alpha1*(1-theta))
DY = 1+alpha1*(1-theta)/(1-alpha1*(1-theta))
etc... It's all right there in the table at the bottom of the spreadsheet, try it out.
You can use those expressions to analytically determine what happens when you try
1) Ramanan's idea: scale alpha2 by T2/T1
2) A H's idea: scale alpha2 and G by T2/T1
It's easy to see that you will not get the correct answers in steady state for H, or Y (you can check the other measurements T, YD and C as well, but I don't feel like copying the formulas over... they get messed up on the way).
But not only that, even if you managed to make a bunch of scaling changes to fix it, you still do not get the exactly correct answer, only an approximation. The correct answer requires use of a proper transition matrix, which involves exp(a*T). So, the CORRECT way to scale from T1 to T2 is as follows:
A2 = A1^(T2/T1)
B2 = B1∙(A2-1)/(A1-1)
Again, I explain why under the spreadsheet.
1st spreadsheet implementing Ramanan's method.
2nd spreadsheet implementing the method I describe above.
And if you don't believe me, I encourage you to read this short reference on the subject. It covers all you need to know about transition matrices and how they are used to determine discrete time A and B matrices in the first 4 or 5 pages.
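For anyone who wants to check this without the spreadsheets, here's a minimal sketch of the recursion and the resampling rule above (a sketch, not G&L's or Tom's own code; it assumes the SIM parameter values θ = 0.2, α1 = 0.6, α2 = 0.4 and constant G = 20 quoted in this thread):

```python
import numpy as np

theta, alpha1, alpha2 = 0.2, 0.6, 0.4   # G&L's SIM parameters
G = 20.0                                # exogenous government spending

# Tom's discrete-time coefficients at sample period T1 = 1
A1 = 1 - theta * alpha2 / (1 - alpha1 + theta * alpha1)
B1 = 1 - theta / (1 - alpha1 + theta * alpha1)

def simulate(A, B, n):
    H = 0.0
    for _ in range(n):
        H = A * H + B * G
    return H

# Resample to period T2 with the transition-matrix rule above
T2 = 0.25
A2 = A1 ** T2                        # A2 = A1^(T2/T1)
B2 = B1 * (A2 - 1) / (A1 - 1)

print(simulate(A1, B1, 100))                  # -> 80.0
print(simulate(A2, B2, 400))                  # same elapsed time, also -> 80.0
print(B1 * G / (1 - A1), B2 * G / (1 - A2))   # analytic steady states: both 80.0
```

The last line is the algebraic check: B2·G/(1−A2) reduces to B1·G/(1−A1), so this resampling leaves the steady state (and the underlying time constant) unchanged.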
"Ramanan, I challenge you to demonstrate what you're saying. Try it for yourself."
Lulz.
Mean lag as per the Appendix of Ch. 3 of G&L is inversely related to alpha2.
Suppose you change the time step from a quarter to a year.
Alpha 2 also changes by a factor of 4 because consumers consume 4 times out of their wealth in a year compared to a quarter.
Mean lag changes by a factor of 4, which is what is expected because of change in timestep.
That's all there is to it.
I don't know what you are achieving with your spreadsheet.
As for your point about scaling and the complications because of something analogous to compounding ... that's just a minor point dude.
For steady state analysis, just get rid of the sample indices. So say starting out with sample period = 1 year, we have:
H = A*H + B*G
Y = CY*H + DY*G
Thus
H = B*G/(1-A)
Now see what happens if we use
1.) Ramanan's method: scale alpha2 by T2/T1 = 1/4
A was (1-alpha2*X) and goes to (1-alpha2*X/4)
H = 4*B*G/(1-A) =/= B*G/(1-A) ... it changed!
2.) A H's method: scale alpha2 and G by T2/T1
H = B*G/(1-A) ... so far so good, but then look:
Y = CY*H/4 + DY*G/4 =/= CY*H + DY*G, ... it changed!
And that's just the steady state. Like I say, a straight scaling can only give an approximate answer, for the same reason that with compound interest, you don't pay 1/4 the amount you would during a full year if you make payments for only 1/4 a year. It's not a linear scaling.
It's actually much easier to just do the resampling properly, as per that reference I give above.
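A quick check of that steady-state algebra (a sketch using the A1/B1 expressions quoted above, not the spreadsheets):

```python
theta, alpha1, G = 0.2, 0.6, 20.0

def steady_H(alpha2):
    # H* = B*G/(1 - A), with A and B as in the state-space form above
    A = 1 - theta * alpha2 / (1 - alpha1 + theta * alpha1)
    B = 1 - theta / (1 - alpha1 + theta * alpha1)
    return B * G / (1 - A)

print(steady_H(0.4))      # 80.0, the G&L value at the original period
print(steady_H(0.4 / 4))  # 320.0 -- scaling alpha2 alone moves the steady state
```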
Tom, if your model doesn't adjust G to t I don't know what it is trying to show...
Jason, yes the curves will be different because alpha2*H introduces compound growth. It is well known in accounting that compounding over different periods gives different results. This is why there are laws about how credit cards present APR.
In the model alpha2*H affects the growth rate of H through T, so shorter time periods actually cause the growth of H to slow. This is what happens in Tom's spreadsheets if you adjust G correctly.
A H, I DID try adjusting G just like you said, in both the steady state analysis and numerically: it didn't work. Unless I've made an amazing pair of offsetting mistakes in my formulation of the discrete time system coefficients in terms of alpha1, alpha2 and theta above, which just happens to give me all the results I'd expect, I'm not sure how you get around Y being different in steady state if you scale both alpha2 and G by the same amount, as I point out in my March 7, 2016 at 9:20 AM comment.
Now, it's *quite possible* I did make such a pair (or more) of those errors, and they just happen to offset each other perfectly most of the time, but *I think* it's unlikely. Check it out for yourself. I'm happy to admit I didn't do the algebra on alpha1, alpha2 and theta correctly if that's the case. But the general principle for using A = exp(a*T) to convert back and forth from discrete time to continuous time (and thus to resample with different sample periods as well) I'm sure about. A 1st order difference equation suggests a sampled exponential continuous time system, which doesn't depend on any sample periods (obviously). Thus to sample this continuous time function, the exact expressions are, for A and B:
A = exp(a*Ts)
B = (A - 1)*b/a
if the continuous time system is:
dh/dt = a*h + b*g
where H[n] = h(n*Ts) and G[n] = g(n*Ts) and Ts = sample period. This makes the system invariant to all different sample periods Ts.
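A minimal sketch of that exact discretization (illustrative a, b, g values, not the SIM numbers):

```python
import math

a, b = -0.5, 1.0   # continuous-time system dh/dt = a*h + b*g
g = 2.0            # exogenous input, held constant between samples

def discretize(Ts):
    A = math.exp(a * Ts)      # exact sampled-system coefficients
    B = (A - 1.0) * b / a
    return A, B

def simulate(Ts, t_end=20.0):
    A, B = discretize(Ts)
    h = 0.0
    for _ in range(int(round(t_end / Ts))):
        h = A * h + B * g
    return h

# Same trajectory endpoint no matter the sample period:
print(simulate(1.0), simulate(0.1), simulate(0.01))  # all ~ -b*g/a = 4.0
```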
I will add a control to both spreadsheets so you can change all the values of G easily.
But again, take a look at my steady state analysis. Do you see a problem? It shows that if I do what you advice steady state Y changes. Did I make an error? When I try it out numerically, I get the same result.
In terms of changing G to different units (e.g. dollars/quarter instead of dollars/year), you can certainly do that, but if you do, you have to adjust B accordingly (+ other easy changes to "D" values). That's just a matter of putting your exogenous input (G) and your outputs (what I call "measurements") in the units of your choice. It's not a difficult procedure, but it's unnecessary for getting the right answer. I could always express G in terms of pennies per February no matter the sample period if I wanted, as long as I accounted for that in B and the D constants (DY, DT, DYD, DC, as per above).
What I mean by "adjusting" B and the "D" constants is to scale them by the inverse of what you scale G by to get it into the units you want.
Delete... "invariant to all sample periods" ... I'm assuming there that g(t) is held constant over a sample interval. If it's not, you cannot use a constant B, but instead must integrate. See the 2nd equation with an integral sign on page 3 here.
I don't really understand what you mean by correct: the steady state Y for one period is 100 per 1 period, the steady state Y for 2 periods is 200 per 2 periods; these are the same values.
A H, Ah, OK. You're assuming the units for the inputs and outputs change each time you change the sample period. That's easy to change in my model by simply scaling B and/or the "C" constants and/or "D" constants accordingly.
DeleteSo, yes, I could do that. However, that doesn't change the method for adjusting A and B so as not to alter the natural frequency (the time constant) of the corresponding continuous time system.
Just straight scaling alpha2 may be a decent approximation most of the time if those other units are changed. What you miss by doing that is (at least) the "compounding effect," which could be small.
It's that second order "compounding effect" that's at issue. You can make all kinds of changes to deal with it.
Or you can give up the idea that it's "accounting" and say it's rather a definition or an assumed time scale.
"It's not a linear scaling."
Fine. Even if you use differential equations, you end up with the same mean lag theorem, which is given in the appendix.
I'm not assuming the units; any flow variable by definition has money/time period as its unit.
You should add the flow and stock matrix from Godley and Lavoie to your model, and this should be obvious to you.
... but not always: for example if you had something (call it x) that grew at 200% a year, then after 1 year you'd have 3x of it. But if you try a straight scaling of the rate to a Ts << 1 year, you'd get about 7.4x, a difference of 4.4x
I think I did that example correctly:
x+2*x (compounding yearly)
x*(1+2*Ts)^(1/Ts) (compounding for Ts < 1 year)
x*exp(2) (compounding continuously)
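That example checks out numerically (a quick sketch of the three compounding conventions just listed):

```python
import math

x, rate = 1.0, 2.0  # 200% growth per year

print(x * (1 + rate))  # 3.0: compounded once a year
for Ts in (0.25, 0.01, 0.0001):
    print(x * (1 + rate * Ts) ** (1 / Ts))  # approaches x*exp(2) as Ts -> 0
print(x * math.exp(rate))  # 7.389...: continuous compounding
```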
Jason - the compounding is behavioral; the accounting is fine either way.
A H & Ramanan, A H's comment above made me realize why I had trouble with the alpha2 scaling method of changing sample periods. I updated my 1st spreadsheet accordingly, and now it works pretty well for Ts not too much bigger than 1 (and smaller, of course). I explain the change in "Update 2" at the top.
However, that method of changing the sample period does change the dynamics of the system when Ts gets to be about 10 or larger, whereas the method I use on the second spreadsheet does not change the system's dynamics at all. Give them a try:
Old spreadsheet, now with a fixed alpha2 scaling method.
New spreadsheet, for comparison (still works fine).
Try both with Ts = 0.5 and you won't see a difference, but then try them with Ts = 10 and you'll see a big difference.
In order to take larger time steps, you need to *increase* the value of alpha2. Unfortunately, alpha2 is restricted to be less than alpha1 by the model (and both must be less than one).
See Eq. (3.7) in Godley and Lavoie.
The values of alpha1 and alpha2 are later constrained in the text such that
(1- alpha1)/alpha2 = 1
called the "stock-flow norm".
It's a totally incoherent way to deal with the changes induced by changing the time step -- it violates the model assumptions and the rest of the analysis in G&L.
Jason, good catch regarding the restrictions on alpha2. Now that you mention it I do recall the restriction from the table you reproduced from G&L in your previous post:
0 < alpha2 < alpha1 < 1
In contrast, when I use the method of spreadsheet 2 to change the time step I don't need to change alpha2, and thus it seems to still work fine. Do you agree that it should?
It seems to effectively re-scale ΔH ... i.e. exactly the effect of adding the Γ factor. Which is what I was saying.
But yes, basically figuring out the continuous limit and then keeping the parameters consistent with that seems like a valid method -- I didn't see any issues with the implementation.
Jason, since the way I did the spreadsheet didn't require G or T as internal variables, I kept them as annual rates (at least on the 2nd spreadsheet). Now I expected the ΔH curve to change with Ts: it's the only one I would expect to do that, since the amount accumulated per Ts would change. However, I just checked out what you said by adding another column to calculate Γ. Since my Ts*G is your G and my Ts*T is your T, the expression is Γ = ΔH/(Ts*(G-T)), where ΔH[n] is calculated as H[n] - H[n-1]. Sure enough, you're correct! When Ts=1 I have Γ=1, but for any other value of Ts, Γ =/= 1. You have to scroll to the right to see the Γ column (and I zeroed out the 1st cell since I didn't like looking at a divide by zero error). Interesting!
I thought ΔH should scale with Ts (the only curve I have that should do that), but I wasn't sure about Γ =/= 1, but sure enough, you're right! I added a plot of it.
I added a column for Γ (scroll to the right). My Ts*G is your G and my Ts*T is your T (since I always interpret G and T as rates in per year units)
Regarding my plot of Γ that's only =1 when Ts = 1... I suppose an SFC modeler would say that means my method for changing sample rates is invalid since doing so leads to ΔH =/= G - T. But looked at another way, could you say that forcing ΔH = G - T implies that compounding is calculated only at the sample times (and never in-between)? Hmm, I'm not sure ...
Since my G, T, Y, YD and C are really all rates I bet one solution for keeping gamma=1 over all time intervals is to use their integrals over the time interval. That is:
ΔH = integral over Ts (G - T)
Which specifies one of the paths you speak of in the post. I'll give it a try and see if that works. My goal here is to satisfy all their equations while remaining truly sample period invariant.
Ok, I think I've got it. With the parameters given, the system has a time constant Tc = 5.986085297 in units of the Ts = 1 sample period. So that explains my Γ for small Ts:
lim Ts->0 Γ(Ts) = 1.0858 = (1/Tc)/(1-exp(-1/Tc))
... the ratio of integrating unity over that fraction of a Tc to integrating a decaying exponential (G - T) over that fraction of Tc. So when you write:
"That gives us a third way to specify the implicit model assumption: that the rate of change of G−T is constant."
That's what's going on. So to make a model with
ΔH = integral from t to t+Ts (G−T)*dt
with G and T as rates (G constant and T = G*(1-exp(-t/Tc))) work out correctly I have to include a Γ as you say. That way I both get true sample period invariance (fixed Tc for all Ts) and I match their model at Ts=1. Interesting!
Either that, or I have a yucky piecewise constant T with stair steps equal to 1 in width (for all Ts), or some even worse mess.
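For reference, those numbers can be reproduced from the A1 coefficient quoted earlier in the thread (a quick sketch; the parameter values are the SIM ones used throughout):

```python
import math

theta, alpha1, alpha2 = 0.2, 0.6, 0.4
A1 = 1 - theta * alpha2 / (1 - alpha1 + theta * alpha1)  # ~0.8462

Tc = -1.0 / math.log(A1)   # time constant implied by the Ts = 1 model
print(Tc)                  # ~5.9861, the value quoted above

# Small-Ts limit of Gamma from the expression above
print((1 / Tc) / (1 - math.exp(-1 / Tc)))  # ~1.0858
```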
"SFC ≠ PKE"
Regardless, any post in an economic theory blog containing SFC will probably be about PKE. Plus, Jason seems to agree that that is his general topic of discussion for the next month.
My exasperation is valid.
Jason,
So what happened to ΔΔH = ΔG + ΔT?
Henry
Count your blessings that it disappeared Henry! Lol
I'd say it's Jason, his fingers crossed, hoping that no-one notices. :-)
DeleteH.
"So what happened to ΔΔH = ΔG + ΔT?"
+1
A score?
To whom then?
H.
"A score?"
Was that for me, H?
+1 is just a Twitterish way of saying "same here"
Henry, I answer your questions you left for me here:
http://banking-discussion.blogspot.com/2016/03/answer-for-henry-1.html?m=1
That equation represents the curvature of the red line segments directly.
There is no difference in the analyses.
One shows it with the first and second derivatives. This one shows it with the first derivative and integral (antiderivative).
I guess it's my fault for talking about calculus on an Econ blog. Unfortunately this issue needs calculus to understand.
I tried a picture in the post above. Specifying the end points doesn't specify the curve between them.
It doesn't specify the curve because it doesn't specify the slope (and curvature) of the curve at each point.
Why is this hard?
What is so confusing?
I showed in the previous post that you could add a factor out front of Equation (1) and it changes nothing except the rate of approach to the final state.
Just read a calculus book. Or find some notes on the Internet. Specifying two points does not specify a curve.
"Just read a calculus book."
Lulz. Everything you accuse others of applies to you.
I mean your confusions got nailed when you wrote something as stupid as
ΔΔH = ΔG + ΔT = ΔH
How is ΔΔH = ΔH?
Jason,
I understand your diagram.
What about my little bit of facile algebra in your previous blog which showed that the terms in this equation were of the form:
H/Δt, G/Δt, T/Δt.
They don't look like derivatives.
"They don't look like derivatives. "
Or even difference equations.
Do I have this wrong?
H.
"That gives us a third way to specify the implicit model assumption: that the rate of change of G−T is constant."
Completely bonkers.
Where is it written in G&L models that the rate of change of G-T is constant?
Perhaps you are yet to appreciate the beauty of the models in which it is derived that G-T is endogenous.
You still don't get it.
I tried. It's really up to you now to show some good faith attempt to understand it.
Or don't. Remain blissfully unaware that you've made an implicit assumption.
This is just convincing me that people who use SFC don't really know what they are talking about.
Hahaha. Learn some accounting dude.
And some elementary mathematics too.
"Remain blissfully unaware that you've made an implicit assumption."
It seems to me they accept this and they have their own way of adjustment which is similar to yours (which I thought I demonstrated with some facile algebra).
You keep flogging the dead horse.
H.
Jason,
I ask you again, has this really got anything to do with SFC specifically?
Take any DSGE paper written in discrete time and you will get similar budget constraints. For example, the most recent paper I find on https://nepdge.wordpress.com is this: http://www.rieti.go.jp/jp/publications/dp/16e013.pdf. Equation 2 is a government budget constraint of very much the same form (albeit with more variables).
I can see what you're saying, but I don't think it matters because that's not the way these models (SFC or anything else done in discrete time) are conceived. In [deltaH = G-T], it is G that is exogenous, not some function that describes the path of G over time. Discrete models are not intended to represent discrete measurements of continuous functions. They are something different altogether.
DSGE models are log-linearized and that process effectively removes this degree of freedom since it specifies rates.
Effectively, it is the solution where the growth rate of H is specified in terms of G and T -- not just the two end points of a time step.
SFC models like the one above are linear so specifying the end points doesn't specify the path between them.
Unless! You say dt is the smallest possible time step. In that case the red lines are invalid in the graph above.
But that amounts to making the implicit time scale explicit ... i.e. equal to dt.
This is getting ridiculous. In your world the extra €50 just appears out of thin air. In the real world the SFC model will show where this comes from because it has to come from somewhere.
You're doing voodoo, not math, not accounting. You are literally just making up "facts" as you go along.
I'm really not sure why anyone here cares. In addition to a few basic math errors, you've proven that you have no grasp of basic accounting.
Good bye.
I think we gave too much benefit to Jason by accusing him of bad accounting. It's even calculus that he's confused in.
What specifically is this confusion?
Without that, this is basically just ad hominem.
Doesn't really help your case that this is an irrelevant technical issue that's already understood.
Hello Jason,
I wrote about this on my blog (BondEconomics.com). Any discrete time model has an embedded time constant associated with the sampling frequency (if you convert the discrete time dynamics back to continuous time). That is just an inherent property of the sampling operation. Since everyone is presumably aware of this, there is no need to specify it as an assumption.
Your complaint assumes that the "true" time domain for economic operations is continuous, without any upper limit to the frequency. However, since monetary transactions are normally settled at the end of the business day, the maximal effective frequency is daily, and that is probably a massive exaggeration for most economic activity -- very few workers are hired on a daily basis.
Brian, in terms of G&L's SIM model, I do that here.
Brian --- that just turns the adjustment process to the steady state into a finite step artifact. It's not really there because it depends on the size of the time step.
But Godley and Lavoie seem to think it represents something a real economy does. They have later cyclical adjustments. But those would be finite lattice effects as well.
That's a fine solution by me! But I don't think G&L (or the other PKEs) want to give up the time dynamics as an artifact :)
And single days are small enough to still have an issue. Whether the adjustment process takes 20 days or 20 quarters is a big deal!
:)
Jason,
I have just caught up with the last three posts / comment threads, and need to read them again to take in the detail. I think that this is the best management summary:
https://www.youtube.com/watch?v=kQFKtI6gn9Y
Let’s go back to the first post where you said something that I imagine almost everyone on these threads would agree with:
“Post Keynesians are critical of mainstream economic methodology and seem to take a pluralistic approach. This is as it should be because there seem to be few empirical successes coming from mainstream economic methodology”
As I have said before, I have been reading economics blogs for about six years after many years of working as a consultant in business and government. I take a pluralist approach to economics. What does that mean though? I deliberately seek out variety. I read mainstream blogs, PKE blogs, MM blogs, your blog and others. I read blogs by academics and blogs by Wall Street practitioners. I read blogs about modelling and blogs about the more philosophical aspects of economics. I switch from blog to blog whenever I get bored of one perspective. I don’t comment on any blog until I have read a good number of posts and established where the author is coming from.
My consulting training tells me to focus fact-based research on specific questions. As I have said to you before, the first question I ever asked in researching economics was “how does the banking system work”? This seemed to me like a very obvious question, particularly in the current crisis. Banking and money are to economics what the heart and blood are to medicine. Who would trust a doctor who doesn’t understand how the heart works? Why would anyone trust an economist who doesn’t understand how the banking system works? I was astonished to find that mainstream economists could not answer this very obvious and very basic question. The best answer I got could be paraphrased as: ”we don’t know much about banking. James Tobin was our go-to guy on that subject. Unfortunately, he’s dead”. As a result, I looked further afield and found that only the PKE school had any sort of answer to my VERY FIRST question on economics. There is more to PKE economics, indeed to any economics, than mathematical models.
One requirement of a pluralist approach is openness to new ideas. An advantage is in seeing how people approach the same questions in different ways. Another advantage is in comparing answers from different perspectives. If Paul Krugman and PKEers reach similar policy conclusions from different starting points, that gives me more confidence that they are right. If they disagree then that makes me think harder about the difference of views in order to try to figure out who might have the better answer.
I have discussed this approach with you before using examples from business. Much business success comes from effective teamwork. Effective teamwork arises from understanding the point of view of each team member and figuring out how they can best contribute to improving the whole.
(cont’d)
You say that pluralism is one of PKE’s strengths. However, you seem to have made no attempt to understand PKE before taking pot shots at it. You don’t seem to have read (m)any PKE blogs or engaged in discussions with any PKEers by asking questions on their blogs. Your engagement with the PKEers in these posts is entirely confrontational.
In your first post you said:
“Noah followed up with a Tweetstorm that really helped me understand what Post Keynesianism is -- it is basically activism in the mold of the "Chicago school", but from the left (so I am sympathetic)”.
However, Noah Smith doesn’t know anything about PKE. Steve Keen makes jokes about him in some of his presentations due to Smith’s complete ignorance of PKE. The very mild mannered Lars Syll was driven to calling Smith a troll on his blog a few days ago due to something he had said (possibly the same Tweetstorm). Noah Smith is the very opposite of a pluralist economist. I would remind you that this is the same Noah Smith who deletes your comments from his blog. I have no idea why you think that it is appropriate to form a view on PKE based on the uninformed views of a troll who doesn’t listen to anything YOU have to say on YOUR approach to economics either. At minimum, research with no established facts does not represent a scientific approach. It’s just bullshit.
I know that you are a very intelligent guy and I am interested in many of the criticisms of mainstream economics arising from your work. However, these last three posts have been complete car crashes, and the car crashes are almost entirely of your making. I’m not going to make detailed technical comments at this stage as you are not listening to what the PKEers are saying, so it would be pointless. I may make some detailed points after re-reading the posts and commentary.
One of the biggest flaws of this blog is that you think of the economy almost entirely in physics analogies. That makes many posts opaque to non-physicists. You should use real-world economics examples as that would be easier for others to understand but would also help you to see why physics analogies are not always appropriate. The discrete time versus continuous time issue is a good example. The economy is a sequence of discrete events and this can make summary totals very lumpy e.g. consumer spending may increase dramatically just before Xmas; government tax receipts may spike near specific deadlines. There was a good example of this in the UK around Q3 2012. The government claimed that their policies had caused an improvement in economic performance. In fact, it was the one-off spending spike from the London Olympics which had caused the improvement.
If you think that pluralism is a good approach, you need to learn to listen to others before assuming that they are wrong. What you don’t seem to understand is that both you and the PKEers are outsiders in economics. If you are not prepared to listen to PKEers about their approach, why should they or anyone else listen to you about your approach?
Jason has made a mountain out of a mole hill and apparently likes the view from the top but doesn't realize the mountain is made of sand.
Henry
Henry,
As I said many times: this is a relatively small issue mathematically, but as I also said, it's ideologically problematic for SFC analysis.
The proof is in the incessant substance-free attacks!
I'm done with this.
"it's ideologically problematic for SFC analysis."
Why? Sure this is an important technical issue but it does not defeat the model. You've admitted as much. And you've got yourself in a tangle several times trying to wiggle yourself out of it with your updates, as you call them. Perhaps if you weren't so hairy-chested about it early on you'd be showing us your IE version by now.
H.
Jason - adjusting the time step, so long as you avoid aliasing, will give rise to roughly the same dynamics if you translate back to continuous time. However, if you take too low a frequency, you start to lose information; the quarterly frequency is possibly too low.
However, it seems relatively obvious to most people that if you take a quarterly frequency, you will lose some information relative to a weekly frequency. That is, a sampled model will look slightly different. But so what? Any economic model is going to be an approximation of reality; that's pretty much a core part of post-Keynesian economics. It's only the mainstream that seems to believe it can proceed with "scientific" mathematical models, and even there, it's only a handful of zealots in academia that seem to believe that.
Really nice point Brian.
Henry
You've mistaken attempting to explain something multiple ways for "wiggling". All of those updates stand. I made one mistake (which I admitted and documented).
If I say a sheep is an animal and then say a sheep is a mammal, I'm not wiggling out of saying a sheep is an animal.
"The proof is in the incessant substance-free attacks!"
You started the attacks when you called this a "major flaw" and decided to declare that a very commonly used modeling technique was wrong. Now, after you've been exposed as making very basic math AND accounting errors, it's just a "relatively small issue". No, the only errors were in your analysis and your conclusions.
But let's talk about the "flows" between these two insulting stock conclusions. :-)
You took it upon yourself to call PK economists and SFC modelers "stupid" and "dumb" along the way. But it turns out that the mistakes were of your own doing.
A little humility comes in handy when you've insulted a lot of people due to your own mistakes.
Apparently it is a major flaw because neither you nor Ramanan seem able to cope with it!
Just call "delta H = G - T" a definition. Or a model assumption.
If you can't do that, then it is a major flaw.
As a non-economist, non-accountant, and non-mathematician layperson who is interested in all three, I am probably not qualified to comment. But I thought the calculus was meant to deal with dynamic situations whereas accounting only deals with monetary transactions, one at a time. Transactions are events that happen at a particular time, more or less. There is an interval or space between each transaction so transactions do not seem to me to be continuous processes. Thus I cannot see how you would apply calculus to accounting, but as a layman I am probably missing something obvious I suppose.
That is basically the idea, but it's the other way around: accounting doesn't specify the model since it only deals with end points.
Jason, check out the 2nd equation with an integral sign on page 3 of this set of charts. It ties into your point in this post: if u(t) changes between sample times, you must integrate using that formula rather than calculating a constant B (see pg. 4 for the expression for constant B (they call it "Bd")).
And of course that's assuming continuous time A is constant (continuous time A is just called "A" as opposed to their discrete time "Ad").
Text:
ReplyDelete"Basically, accounting doesn't specify the path since many functions of time will have the same endpoints. This should be obvious: my bank balance last year was €50, my bank balance this year is €100. Did I spend just €50? Maybe I made €1000 and lost €950. There are actually an infinite number of possible paths that satisfy these endpoints."
A bank account is an excellent analogy. :)
Let B = bank balance
let D = deposits
let W = withdrawals
Then we have
∆B = D - W
Suppose that I open my account with a deposit of ¥100,000. At this point I have that in my account. Since I started with ¥0,
B = ¥100,000
Also we have
∆B = D - W = ¥100,000
for the brief time period before and after my deposit.
Now, say that I withdraw ¥50,000. Now,
B = ¥50,000
and
∆B = D - W
holds for any time period since that of my initial holding of ¥0. :)
Now let's define
ΔB = Γ(D - W)
Let Γ = 1.2
Now when I open my account with a deposit of ¥100,000, we have
∆B = 1.2(¥100,000) = ¥120,000
and
B = ¥120,000
Later I withdraw ¥50,000, and we have
∆B = 1.2(−¥50,000) = −¥60,000
and so
B = ¥60,000
:)
Now let Γ = 0.5
Now when I open my account with a deposit of ¥100,000, we have
∆B = 0.5(¥100,000) = ¥50,000
and
B = ¥50,000
Later I withdraw ¥50,000, and we have
∆B = 0.5(−¥50,000) = −¥25,000
and so
B = ¥25,000
:)
Oh, yes. Suppose that I open my bank account by transferring ¥100,000 from another bank account. Then, regardless of Γ, I have to apply the equation,
∆B = D - W
so that now
B = ¥100,000
The banks are in cahoots. ;)
Jason,
ReplyDeleteI’m not sure whether you want any more comments on this but I have a few technical comments.
Jason (in comments on this post): “Just call "delta H = G - T" a definition. Or a model assumption”
I agree that this equation is not a logical identity in the sense that it is possible to envisage scenarios where the equation does not hold. It might be better to call it a strong assumption. However, in practice it’s difficult to see why the government would choose any non-identity scenario.
I am assuming that the scope of this model is a country with its own currency. Wynne Godley was one of the first to warn of the potential problems of the Euro, so I presume that he would have had a separate model for a shared currency environment.
First, assume that G is greater than T. That means that, based on taxes alone, the government can’t pay all of the bills arising from its spending. The shortfall is G – T, so that is definitely the default scenario.
Now assume that your parameter is less than one. The government is NOT now creating sufficient money to pay its bills. That means that we are in a debt ceiling dispute / default situation. No sensible government would make this choice.
Now assume that your parameter is greater than one. The government is now creating sufficient money to pay its bills but is also creating some extra money. I’m not sure what the government would do with this extra money if it’s not going to spend it. Certainly, if it is raising money by issuing bonds on which it has to pay interest, it would be incurring excessive interest payments for no good reason.
Second, assume instead that T is greater than G. That means that the government is now removing more money from the economy in tax than it is returning via its spending. I’m not sure what a parameter other than one would signify in this scenario.
In summary, I think that you are right in theory that the equation is not strictly a logical identity but I can see why a practical economist would dismiss the non-identity scenarios as extremely unlikely.
Jason (previous post): "But then τ is effectively a money multiplier. And money multipliers are anathema to Post Keynesians"
The money multiplier relates to the issue of money by commercial banks so I don’t think that it is relevant to this model. I don’t know what you mean here.
Hi Jamie,
It's not just that ΔH = G - T isn't an identity, it's that if you let:
ΔH = Γ(G - T)
the only place the Γ shows up is in the rate of approach to the steady state. It has no other impact. That's why I called it an implicit scale -- it takes twice as long to reach the steady state if Γ = 0.5.
Basically, the "half-life" (time scale) of the approach goes as τ ~ Δt/Γ. Since Δt = 1 and Γ = 1 in the formulation in the model, tau disappears. But it's still in there. Implicitly.
This is actually tied to the money multiplier ... let's keep the original equation as H and define "H2":
ΔH = G - T
ΔH2 = Γ(G - T)
In this case, ΔH2 = Γ ΔH ... (think of e.g. H2 = M2 and H = base). That makes Γ an effective "money multiplier". Different values of Γ could be used to fit the empirical data -- specifically the approach rate τ.
If τ is long, then the multiplier is small. If τ is short, then the multiplier is large.
Just to be clear: The model is fine if one selects Γ = 1, i.e. τ = Δt and e.g. defines the time step to be quarters.
But this amounts to an assumption about τ -- the rate of approach to the steady state (and really, *only* that as far as I can see).
If you want to say the economy adjusts to changes in government spending over a few quarters: fine with me! Just say that. But don't say it's "accounting", though. Say it's a choice of τ. But realize τ = 1 is not required by the model. It achieves the same steady state regardless of τ ... i.e. the same steady state as when τ = 1 and "accounting" holds.
I made those pictures on the previous post -- but really, if one says ΔH = Γ(G - T) doesn't make sense with the accounting, then why does it achieve the same result as
ΔH = G - T that ostensibly does make sense with the accounting?
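A minimal sketch of that claim, assuming the standard SIM consumption function and the G&L parameter values used above: Γ rescales the approach, not the steady state.

```python
theta, alpha1, alpha2, G = 0.2, 0.6, 0.4, 20.0

def run_sim(gamma, n=200):
    # SIM with the modified budget equation: Delta H = gamma * (G - T)
    H = Y = 0.0
    for _ in range(n):
        Y = (G + alpha2 * H) / (1 - alpha1 * (1 - theta))
        T = theta * Y
        H = H + gamma * (G - T)
    return H, Y

for gamma in (0.5, 1.0, 2.0):
    H, Y = run_sim(gamma)
    print(gamma, round(H, 3), round(Y, 3))
# Every Gamma reaches the same steady state (H = 80, Y = 100);
# Gamma only sets how fast the model gets there.
```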
Jason: “This is actually tied to the money multiplier ... let's keep the original equation as H and define "H2":
ΔH = G - T
ΔH2 = Γ(G - T)”
OK. I think that a light has gone on in my head. I think I MIGHT understand what is going on. Let’s take a step back.
One of the most annoying things amongst many annoying things about economics is the number of different jargon terms for money. Some people talk about M0, M1, M2. M3; others talk about inside money and outside money; others talk about base money and broad money etc. The jargon being used in the Godley example was high-powered money which contrasts with something or other.
I find all of these terms confusing particularly when you have to move between these terms when reading different blogs and particularly when it sometimes feels like there are different interpretations of some of the terms e.g. I think that the US and the UK have different definitions of some of the Mx terms (or even which of the Mx terms are used), and I think that the UK redefined some of its Mx terms a while back.
I tend to think about central bank money (CENM) and commercial bank money (COMM) where these terms mean money issued by the relevant type of bank and used in transactions between the type of bank and its customers and transactions between those customers. The customers of the central bank are the government and the commercial banks. The customers of the commercial banks are everyone else. I mostly ignore notes and coins.
(The reason for thinking in this split relates to my previous comment (March 7 7:50PM) where I explained that CENM is created with no related loan while COMM is always created with a related loan. The accounting logic is different between the two types which is why they need to be considered separately).
For example, when you pay taxes, you pay with COMM as that is the money you have in your commercial bank account. However, your commercial bank then settles with the government with CENM.
I thought that the scope of the Godley example scenario was high-powered money which I interpreted as CENM (augmented by notes and coins) so there is no money multiplier. The money multiplier (MM) says that COMM = MM * CENM but is not relevant if we are dealing only with CENM. You seem to have a different interpretation of high-powered money, so you see the MM as being relevant to the Godley scenario. Your redefinition to split H and H2 makes sense of that. If you believe in the MM then H2 is a multiplier of H. I think that this has been a major source of the disagreement between you and the members of the school which cannot be named.
This is now a good starting point to take things forward. I’m going to attempt to put together an argument to explain the context of SFC models in plain terms. I am doing this as much for my own benefit as for anyone else as I’d like to be able to explain it to others and there is clearly a language barrier. I will publish this argument here in stages to see if it makes sense to you (I’m not looking for you to agree with the argument – just to understand it and to accept that you don’t need to be crazy to make the argument).
In the meantime, a question. You seem to want to make delta T a variable parameter. I might be missing something as it seems to me that the only data available to include in these models is quarterly GDP data and it will remain as quarterly data for the foreseeable future, so I don’t understand the need for a variable parameter except as an elegant solution just in case the data frequency changes sometime in the long-term future. Am I missing something here, or have I understood this correctly?
I think that they call
ΔH = G - T
an accounting identity because within the model it holds for every interval of time. :)
Furthermore it is easy to operationalize it. I don't think that their verbal description is quite accurate, and they do not spell out what the government does when T > G. Nonetheless, whatever the government does, the equation holds for any interval of time.
How do you operationalize Γ? That's a real question.
I have offered two possible ways. One is to create and adjust a government account with H in it, so that it reflects the difference between Γ(G-T) and G-T. The other is like an exchange rate between what the government spends and taxes and what the non-government sector receives from the government or pays to it.
You have sort of suggested a relation to the money multiplier, but to do so you have introduced a different kind of money, which I suppose to be private bank money, except that there are no private banks in the simple example.
Without an operational definition of Γ, I don't understand your equation.
I added a calculation of the system time constant on the sheet using Ramanan's "scale alpha2" approach to adjusting for different sample periods (Ts). By the restrictions on alpha2, and without changing theta=0.2 or alpha1=0.6, I find the time constant (Tc) varies as such
As Ts -> 0, Tc -> 6.5 years
Ts = 1 then Tc = 5.986 years
As Ts -> 1.5, then Tc -> 5.717 years
A total change of about 14% max.
For different allowed values of theta, alpha1 and alpha2(Ts=1), I find I can get Tc to vary over a much wider range, in terms of ratios (small ranges in terms of absolute value).
For example with alpha1, alpha2 and theta near 1, then Tc varies from about 0.145 (when Ts=1) to 1 (when Ts -> 0), or by a factor of about 690%.
Jamie,
I realize I probably should have written ΔH0 and ΔH instead of ΔH and ΔH2 -- because the latter is what is coupled to consumption (and the rest of the model). The issue is that the money multiplier is a free parameter in the model -- necessary to fit the rate of adjustment to the steady state.
What Godley and Lavoie have done is derive a formula for the decay of Carbon-14 that is ~ exp(-t) with t measured in units of say ~ 1000 years. If they tried to compare their model with data, it would be off. And they'd have no way to fix it! And it wouldn't work for different atoms!
What they need is ~ exp(-t/τ) so they can fit τ to the data. And that is what you should get theoretically.
They want to leave off the τ in order to interpret the model in terms of accounting. But the real interpretation is that the equation ΔH = G - T is a definition -- it defines the rate of approach to the steady state. Since the rate of approach to the steady state could be anything a priori, you need ΔH = Γ(G - T).
RE: changing Δt, it's not that I want to change it, but rather that changing Δt illustrates the dependence on the time scale Δt/Γ.
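Here's a minimal numerical sketch of that point (my own illustration, using SIM's consumption rule to determine T within each step; Γ = 1 recovers the original recursion):

```python
import numpy as np

theta, alpha1, alpha2, G = 0.2, 0.6, 0.4, 20.0  # G&L's SIM parameters

def path(gamma, steps=200):
    # Delta H = gamma*(G - T), with T = theta*Y determined within each step
    H, out = 0.0, []
    for _ in range(steps):
        Y = (G + alpha2 * H) / (1 - alpha1 * (1 - theta))
        H += gamma * (G - theta * Y)
        out.append(H)
    return np.array(out)

for gamma in (1.0, 0.5, 0.25):
    H = path(gamma)
    n95 = int(np.argmax(H > 0.95 * 80))  # steps to reach 95% of Hss = 80
    print(f"Gamma = {gamma}: final H = {H[-1]:.1f}, 95% of steady state at step {n95}")
# All three converge to the same steady state H = 80; only the rate of
# approach (i.e. tau = dt/Gamma) differs.
```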
Bill:
You said: "Without an operational definition of Γ, I don't understand your equation."
As I mention in my reply to Jamie, Γ defines the rate of approach to the steady state via τ = Δt/Γ. It could also be thought of as a money multiplier, where you'd say:
Γ = ΔH/(G - T)
where the numerator is money (like M2) and the denominator is "base money".
One way to look at it is that accounting can't define differential equations and therefore cannot define finite difference equations. Therefore any set of accounting relationships does not pin down a specific model. If you want to claim it does pin down a specific model, then what you are actually pinning down is a differential structure that does not correspond to our usual notions of time.
If, say, you see an economic adjustment to the same stimulus happening faster in one country than another, you are forced to assume there is time dilation ... time is passing faster for them than for you, because the accounting is invariant.
Now this was a weird idea when Einstein suggested the speed of light is constant (and physics is invariant under changes in velocity) ... but it works experimentally.
But to assume accounting is invariant and when things happen faster or slower it's because time is changing ... well, that's just not something I can accept.
I probably should have used gravity and general relativity in this example -- because what we have here is an issue of acceleration (second derivative) not velocity (first derivative), but that example gets at the gist of it.
Jason, thanks for your reply. I’m not sure I understood it, but maybe it’s me. Let’s try a different tack regarding your money multiplier (MM), and if this gets nowhere, I’ll call a halt. I am assuming that we are talking about an economy with central bank money (CENM) and commercial bank money (COMM) rather than the original example, as MM is applicable only to COMM.
Do you have any empirical validation of the money multiplier (MM)? My intuition is that QE has shown that there was no proportional change in COMM when QE increased CENM by a huge amount. If you think the MM is valid, do you think that MM is a constant as shown in your equation above, or a function?
Based on my earlier post about the accounting of money creation, ΔH2 would be something like
ΔH2 = monetary value of the change in outstanding commercial bank loans = value of new loans – value of repaid loans. In outline, this would be the “just accounting” alternative to MM.
PS: Have a look at this Fed paper, which asks “Does the Money Multiplier Exist?”. Here are two extracts from the conclusion:
“Changes in reserves are unrelated to changes in lending, and open market operations do not have a direct impact on lending. We conclude that the textbook treatment of money in the transmission mechanism can be rejected. Specifically, our results indicate that bank loan supply does not respond to changes in monetary policy through a bank lending channel, no matter how we group the banks”
and
“… but the narrow, textbook money multiplier does not appear to be a useful means of assessing the implications of monetary policy for future money growth or bank lending”
http://www.federalreserve.gov/pubs/feds/2010/201041/201041pap.pdf
Jason:
"Γ defines the rate of approach to the steady state via τ = Δt/Γ. It could also be thought of as a money multiplier, where you'd say:
Γ = ΔH/(G - T)
where the numerator is money (like M2) and the denominator is "base money"."
So does my bank account analogy -- where I deposit ¥100,000 in cash in my bank account and, with Γ = 1.2, my account balance increases by ¥120,000 -- get at the operational definition of Γ, because of the difference between cash and bank money?
I think you will say no, but what is the difference, operationally?
Jamie,
Actually, one of the results in the information transfer model is that the money multiplier (i.e. the information transfer index for M2 to M0, with M0 being physical currency) falls over time. Base reserves never seemed to have any effect except on short term interest rates.
Bill,
That isn't how macro works. If the government prints up ¥100,000 that is held by banks, our net holdings of deposits (M1) go up (to say ¥120,000) because the banks make loans on that ¥100,000.
In your example, if you deposit ¥100,000 in the bank, that bank will make loans against it (say ¥20,000), crediting another customer -- not necessarily making the loan to you.
However, you can actually sort of do what you say: if you make a big enough deposit, a bank could give you a line of credit on top of your bank account. They don't usually do it in the form of a loan where they deposit money in your account right away, but functionally it is similar.
Jason:
"If the government prints up ¥100,000 that is held by banks, our net holdings of deposits (M1) go up (to say ¥120,000) because the banks make loans on that ¥100,000."
Right, in the real world. :) However, in the toy economy of their simple model there are no banks and no loans, only cash:
"We shall eventually cover both types of money creation and destruction. But we have reluctantly come to the conclusion that it is impossible to deploy a really simple model of a complete monetary economy in which inside and outside money both make their appearance at the outset. We have therefore decided to start by constructing and studying a hypothetical economy in which there is **no private money at all**, that is, a world where there are **no banks**, where producers **need not borrow** to produce, and hence a world where there are no interest payments." (p. 57) Emphasis mine.
Yes, and the way to interpret the G&L model is that if you use the equation:
ΔH = G - T
it is actually ambiguous as to what H really is -- the meaning of H becomes dependent on the time step. Adding the time scale τ = Δt/Γ to the equation gives us
ΔH = (Δt/τ)(G - T)
which nails down what H is -- effectively choosing the multiplier for government debt (= "money") and what you decide to call H (= "high powered money"). If you choose the time scale to effectively follow the path of G - T, then (and only then) does H mean what G&L say it means.
Otherwise, it's a bit like a "gauge" in physics. I can add an arbitrary constant to H because
ΔH = H - H(-1) = (H + C) - (H(-1) + C)
It is this ambiguity that makes the definition of money ambiguous -- there's no definition of money I know of where only the changes (ΔH) matter and not the level (H).
I understand what G&L want to do, but this equation doesn't do it. Really, they want H = D where D is government debt -- the integral of government spending minus taxes.
Using the equation H = D, however, removes all dynamics from the model! What remains are just a series of linear transformations. Given a value of G, you could solve for everything else up to an overall scale (i.e. definition of a dollar).
PS -- that brings up another thing: accounting has an overall scale degree of freedom. If I multiply every dollar by 100, the accounting all still works out. If the G&L model were all just accounting, there'd be no way to determine the overall level of anything!
Re: model dynamics, it's the feedback loop in the upper left of this block diagram of SIM that creates those. And a big component of that loop is A, which is dependent on alpha1 and alpha2 (our two behavioral parameters). The tax rate theta you could consider the government's behavior parameter, I guess. The way they're put together I suppose results from the "accounting." The structure of A is interesting too: you can see in it G&L's implicit approximation to exp(a*t) as 1 + a*t (with a <= 0 for a stable system):
1 - alpha2*(other terms).
Thus when Ramanan says to adjust alpha2 for different sample periods (for example, divide it by N if you divide the sample period by N), he's trying to keep the time constant approximately the same in their approximation to exp(a*t).
Jason, in the toy economy with no private banks, how do you get a multiplier for government debt? Please spell out how that happens. Thanks. :)
Bill,
It's that the definition of H is ambiguous numerically, so it could have a multiplier or not. That equation does not define H to be equal to government debt. It could be. But it doesn't have to.
You can interpret the degree of freedom allowed by that ambiguity as a multiplier.
Another way to say it is that the toy model doesn't define the situation it says it does.
What G&L need is a mechanism that says there's no multiplier -- since the equations they wrote down are ambiguous.
Bill, just go back to the previous post and toggle between the last two pairs of graphs until you get it.
It's more of a velocity multiplier, but if you can't get past the idea that the equation is ambiguous (or rather, has implicit assumptions that there's no reason to make in the model, so they should be left as a parameter), then it's going to be difficult.
I replied to Greg on the following post with an example that might help.
Jason (in comments on previous post) “I didn't have any particular issues with the model results -- although the source of that original 20 € is notably absent in a model purportedly based on accounting :)”
In the real world, the central bank (part of government in this model) can create new money from thin air. All financial assets are created with equal and opposite liabilities so that the rules of accounting are maintained. This is true of money, loans and bonds. Unfortunately, most economists don’t understand this.
When you take out a loan with a commercial bank, the bank creates the money and the loan as something like
0 = money as asset + money as liability
AND
0 = loan as asset + loan as liability.
When it makes the loan to the customer, it transfers money as asset and loan as liability to the customer, so we then have
Bank has: loan as asset and money as liability.
Customer has: money as asset and loan as liability.
The customer then spends the money with a third party, so we then have:
Bank has: loan as asset and money as liability.
Customer has: loan as liability (and whatever goods or services it has purchased from the third party).
Third party has: money as asset.
Note that the financial assets and liabilities in the economy add to zero throughout this sequence so the rules of accounting are followed at all times.
I have a very speculative theory that the reason that your models work best with base money is that base money is created without a loan element while commercial bank money always has a loan element.
Base money: money as asset is injected into the economy.
Commercial bank money: money as asset AND loan as liability are injected into the economy.
If money as asset has a positive velocity, loan as liability must have a negative velocity.
Note that Steve Keen’s main theory is that it is the build-up of loans as liability in the economy as a result of commercial bank money creation that causes instability in the economy. In boom times, people don’t worry about their loans as liability so they don’t have much negative velocity. In recessions, the negative velocity of the loans as liability overwhelms the positive velocity of the money as asset. When someone pays back a loan, they destroy both their own loan as liability and money as asset as well as the bank’s loan as asset and money as liability. I don’t know whether Steve’s theory has merit but it is at least plausible and consistent with an accounting understanding.
It is this type of accounting logic that is mostly missing from mainstream economics.
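To make the double-entry sequence above concrete, here's a toy ledger sketch in Python (my own illustration of the bookkeeping, not anyone's model; assets are +, liabilities are -):

```python
from collections import defaultdict

ledger = defaultdict(float)  # (agent, instrument) -> signed value

def post(entries):
    # every posting is balanced, so economy-wide financial entries sum to zero
    for agent, instrument, amount in entries:
        ledger[(agent, instrument)] += amount
    assert abs(sum(ledger.values())) < 1e-9, "accounting identity violated"

# Bank creates 100 of deposit money and a 100 loan, lending to the customer
post([("bank", "loan asset", +100), ("customer", "loan liability", -100),
      ("customer", "deposit", +100), ("bank", "deposit liability", -100)])

# Customer spends the deposit with a third party
post([("customer", "deposit", -100), ("third party", "deposit", +100)])

# Suppose the customer later earns the deposit back and repays the loan:
# both the loan and the deposit money are destroyed
post([("third party", "deposit", -100), ("customer", "deposit", +100)])
post([("customer", "deposit", -100), ("bank", "deposit liability", +100),
      ("bank", "loan asset", -100), ("customer", "loan liability", +100)])

print({k: v for k, v in ledger.items() if abs(v) > 1e-9})  # {} -- all destroyed
```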
Summarizing what I've learned, the modeler is faced with several choices. Given that it's desirable to:
1. Be "stock-flow consistent" and satisfy all G&L's equations given any pair of sample times.
2. Be invariant to the sample period.
3. Match G&L's SIM results for the sample period Ts=1
Then this is a tall order. You can use G&L's method, but you won't be perfectly invariant to the sample period. You can use my method, but then you won't be perfectly SFC. You can use a modified version of my method (integrate T) and be both sample period invariant and SFC, but you won't perfectly match G&L's model at Ts=1. Finally, you can build a continuous time model of (the equivalent of) a zero-order hold system (to represent the tax collection rate (T) function in particular), with hold periods = 1. Then you can do everything on the list, but it'll be an ugly pain in the ass, and it won't even be a unique solution (rather than ZOH you can use other functions that have the same definite integral over G&L's sample periods).
Did I miss anything?
Hi, Tom. :)
I am not sure what you mean by being invariant to the sample period. For instance, do you consider something like an interest rate of 2% per year to be invariant to the sample period, even though you have to adjust it to the period? If not, I don't see why invariance to the sample period is desirable.
Actually, I think I satisfy all three here.
Bill, yes a 2% continuously compounded annual interest rate is invariant to the time period. (The 2% annual interest rate is a scale itself.)
In the model, if you change the time scale, observable things in the real world change. This is not desirable in any model. If I decide to measure time in weeks rather than months, it shouldn't change how fast people adjust to government spending shocks.
In the model discussed above, if you change from weeks to months, the time it takes people to adjust to a government spending shock changes by a factor of 50. Since it is just a re-labeling, that doesn't make any sense. There are time series measured with months and with quarters on FRED -- the two series aren't different in any way besides resolution:
https://research.stlouisfed.org/fred2/graph/?g=3KoM
Check it out; except for resolution, GDP is basically the same if you measure GDP in years or quarters. In the model above, one of the measures would be multiplied by 4 if you changed from quarters to years. And that's in addition to the adjustment for the seasonally adjusted annual rate.
Thanks, guys! :)
DeleteBill, given the annual interest rate r percent per year, compounded every T years (consider T <= 1), then after 1 year, the actual annual rate realized (rr) is:
rr = ((1 + r*T/100)^(1/T) - 1)*100 percent
lim T -> 0, rr = (exp(r/100)-1)*100 percent
So for r = 2 percent
T = 1 gives rr = 2
T = 1/4 gives rr = 2.015 (quarterly)
T = 1/365 gives rr = 2.02 (daily)
And as T -> 0, rr -> 2.02 (continuously)
Not a huge change. But for r = 200 percent, a different story:
T = 1 gives rr = 200
T = 1/4 gives rr = 406 (quarterly)
T = 1/365 gives rr = 635 (daily)
And as T -> 0, rr -> 639 (continuously)
So it can make a substantial difference. G&L's SIM model (for example) assumes compounding is done only at the sample times, so clearly the answer they get after a year depends on how many sample times per year. Whether it's a big effect or not depends on the rates involved.
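A quick Python check of those numbers:

```python
from math import exp

def realized_rate(r, T):
    # realized annual rate (percent) for a nominal annual rate of r percent,
    # compounded every T years
    return ((1 + r * T / 100) ** (1 / T) - 1) * 100

for r in (2, 200):
    for T, label in ((1, "annual"), (1/4, "quarterly"), (1/365, "daily")):
        print(f"r = {r:3}%, {label:10}: rr = {realized_rate(r, T):7.2f}%")
    print(f"r = {r:3}%, continuous: rr = {(exp(r / 100) - 1) * 100:7.2f}%")
# r =   2%: 2.00, 2.02, 2.02, 2.02
# r = 200%: 200.00, 406.25, 634.88, 638.91
```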
Bill, measuring time in years in my example, you find a parameter A such that exp(A*t) passes through the points produced by each of those compounding cases. Then the time constant is Tc = 1/A for each system.
2% per year, compounded 1 time / year: Tc = 50.498 years
2% per year, compounded 4 time / year: Tc = 50.125 years
2% per year, compounded continuously: Tc = 50 years
What about for 200%?
1 time / year, Tc = 1/ln(3) = 0.910 years (about 47 weeks)
continuously, Tc = 1/2 = 0.5 years (26 weeks)
0.910/0.5 = a factor of about 1.8!
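And a sketch of the time-constant calculation itself, using the fit described above (Tc = T/ln(1 + r*T/100), with continuous limit Tc = 100/r):

```python
from math import log

def time_constant(r, T):
    # Tc = 1/A where exp(A*t) matches a balance growing at a nominal
    # annual rate of r percent, compounded every T years
    return T / log(1 + r * T / 100)

for r in (2, 200):
    print(f"r = {r:3}%: annual Tc = {time_constant(r, 1):.3f} yr, "
          f"quarterly Tc = {time_constant(r, 1/4):.3f} yr, "
          f"continuous Tc = {100 / r:.3f} yr")
# r =   2%: annual Tc = 50.498 yr, quarterly Tc = 50.125 yr, continuous Tc = 50.000 yr
# r = 200%: annual Tc =  0.910 yr, quarterly Tc =  0.617 yr, continuous Tc =  0.500 yr
```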
Bill, one way to look at SIM is here.
Tom, there are two equilibrations in their SIM model. One occurs across time periods, one occurs within them. The within-period equilibrium value of GDP is given by this equation:
Y∗ = G/(1 − α1 · (1 − θ))
What happens after within period equilibrium is reached? Apparently nothing, until the next time period.
The model does not work if the time period is too short for within period equilibration, so there is a minimum time period. It doesn't make much sense to have an extended period within which nothing happens after equilibration, either. So I think that there is an implied time period in the model, we just don't know what it is. ;)
Bill, you lost me there. In my formulation Y, T, YD and C are outputs of a system whose dynamics are completely determined by a feedback loop with H:
H[n+1] = A*H[n]
with an exogenous input G added:
H[n+1] = A*H[n] + B*G[n+1]
A(Ts=1) = 1 - θ∙α2/(1 - α1 + θ∙α1)
B(Ts=1) = 1 - θ/(1 - α1 + θ∙α1)
A and B are *ONLY* functions of the sample period (Ts=1 period) and the parameters α1, α2 and θ, as I indicate in the block diagram here. The system dynamics (namely a time constant Tc in this case) are completely determined by that feedback loop. I discuss how in this comment.
As for Y, T, YD and C: they're all just scaled versions of H offset by a scaled version of input G (again see my block diagram). All the scaling parameters for these outputs are ONLY functions of α1, α2 and θ. I give what they are (written out as text) in the table in the lower left hand portion of the spreadsheet on this page.
I'm not sure what you mean by a "within period equilibrium." But it's easy to see what the steady state ("across time period?") equilibrium is: just remove the sample indices:
Hss = A*Hss + B*Gss
Assume G's steady state, measured in dollars per (initial) period Ts, is Gss, and H's steady state is Hss. Then the answer is
Hss = B*Gss/(1-A)
I can also calculate Yss, Tss, YDss and Css as functions of Hss, Gss, α1, α2 and θ.
And as I discuss in that comment I link to above, the time constant (Tc) at Ts=1 is:
Tc = -Ts/log(1-θ*α2/(1-α1+θ*α1)) = 5.9861 periods
Regardless of G or the initial value of H.
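A minimal numerical check of this formulation (my sketch, plugging in G&L's baseline θ=0.2, α1=0.6, α2=0.4 and G=20):

```python
import numpy as np

theta, a1, a2 = 0.2, 0.6, 0.4
den = 1 - a1 * (1 - theta)      # = 0.52
A = 1 - a2 * theta / den        # feedback coefficient for H
B = 1 - theta / den             # feedthrough coefficient for G

G, H = 20.0, 0.0
for n in range(100):
    H = A * H + B * G           # H[n+1] = A*H[n] + B*G[n+1]

Hss = B * G / (1 - A)           # steady state: remove the sample indices
Tc = -1 / np.log(A)             # time constant in sample periods
print(f"H[100] = {H:.2f}, Hss = {Hss:.2f}, Tc = {Tc:.4f} periods")
# H[100] = 80.00, Hss = 80.00, Tc = 5.9861 periods
```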
Can you tell me how you arrived at
Y∗ = G/(1 − α1 · (1 − θ))
?
You can leave a comment for me directly here if you want to give Jason a break from chatter about this problem. :D
Bill, also, I discuss in a comment here (intended for Henry) and the one below it how to transform that block diagram into one for an equivalent continuous time system with an identical time constant. As long as you can express g(t) for t>=0 (representing the integral of all government spending from -inf to t) as a Taylor expansion about t=0 with a finite number of terms, it's quite easy to construct an equivalent discrete time system (i.e. one producing the exact same answers as the continuous case at all the sample times). Doing so, however, requires expanding the scalar B into a row vector, one element per Taylor expansion term of g at t=0. Of course that may not be a very useful exercise!
Anyway, I'm sure Jason is sick of this, so leave a comment for me anywhere on that blog if you want.
Bill, here's an updated pair of block diagrams with the discrete and continuous time versions side by side. In general X[n] = x(t=n*Ts), and x' = dx/dt.
Bill, OK, I fleshed out the case where g' can be expressed as a Taylor series + Dirac deltas, where g represents the integral of all government spending. I called it SIM4, and it's just the math. I think it's right. ... as if you care! Lol
Tom and Jason,
It seems that you are barking up the wrong tree. The true NIPA/FOFA/SNA2008 accounting identity is this:
G(t) + nonG(t) - T(t) = GS(Gov Saving)(t) - GI(Gov Investment)(t) = Flow_of_Money(t)
NonG = Non-discretionary spending from government.
G = GC(Gov Consumption) + GI(Gov Investment)
Change of Stock of Money ΔH = Stock-of-Money(t) – Stock-of-Money (t-1)
Stock-of-Money(t) = Revaluation[Stock-of-Money (t-1)] + Flow-Of-Money(t)
Thus ΔH = Revaluation[Stock-of-Money(t-1)] – Stock-of-Money(t-1) + Flow-Of-Money(t)
Unless the government’s Stock-of-Money(t-1) consists of cash-like financial instruments with no change in current market value, ΔH will not be equal to Flow-Of-Money(t)!
It is a flow balance, not stock balance. Also, there is no stock-flow balance in NIPA/FOFA/SNA2008 SF balance matrices. Do not mix stock variables with flow variables incorrectly.
For correct representation of temporal data, stock variables are represented by time-snapshot-based valid time, and flow variables are represented by time-interval-based valid time. Both time representations are time periods:
[start-time end-time] and [0 end-time-snapshot]
Peiya,
That isn't how it is set up in the model under consideration (from Godley and Lavoie). In the model, government debt is basically cash.
Bill, last comment on this: one (narrow) way to interpret the issue is that G&L implicitly make the following approximation:
exp(a*t) ≈ 1 + a*t
Which only holds when |a*t| << 1
Just for laughs, I did another version of SIM (SIM4) (just the expressions): the goal being to match G&L at Ts=1, stay sample period invariant, always satisfy ΔH = G - T, and to allow for a wider class of government spending functions.
BTW, making some assumptions (continuous compounding, all variables = 0 at t = 0, and restricted average gov spending functions per period (G)), I did finally make a version of SIM (SIM6) that satisfies ALL G&L's equations (I think) (including gamma=1) at all times, matches their results and is sample-period invariant. I do it by adjusting alpha1 AND alpha2 for changes in the sample period. Adjusting theta wasn't necessary. I describe the adjustment procedure here.
ReplyDeleteThinking about this:
(1) ΔH = Γ*(G - T)
In terms of my own formulation, where I have for the dynamics of H and measurement for T:
(2) H[n+1] = A*H[n] + B*G[n+1]
(3) T[n+1] = CT*H[n] + DT*G[n+1]
Where
A = 1-α2θ/(1-α1(1-θ))
B = 1-θ/(1-α1(1-θ))
CT = α2θ/(1-α1(1-θ))
DT = θ/(1-α1(1-θ))
Substituting (3) into (1) indeed produces (2) when Γ=1.
Then Γ ≠ 1 is equivalent to multiplying both α2 and B by Γ, which is commentator A H's approximate prescription for moving to a new sample period Ts2 (in this case Γ times the old sample period Ts1, i.e. Ts2 = Γ*Ts1) with the same steady state for H, except here without changing the time steps on the plot. So, without changing the time steps on the plot, Γ = 0.5 should approximately double the apparent time constant Tc (AKA "adjustment time"), using the approximation exp(-t/Tc) ≈ 1 - t/Tc.
... another way to say that is Γ = 0.5 approximately halves the sample period of the system, but we keep calling each sample period "1 period" instead of "half a period" so it appears to approximately double the time constant instead.
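A quick numerical check of that claim (my sketch, treating Γ as multiplying α2 inside the feedback coefficient A, per the substitution above):

```python
from math import log

theta, a1, a2 = 0.2, 0.6, 0.4

def Tc(gamma):
    # feedback coefficient with Gamma multiplying alpha2
    A = 1 - gamma * theta * a2 / (1 - a1 * (1 - theta))
    return -1 / log(A)  # time constant in (nominal) sample periods

print(f"Gamma = 1.0: Tc = {Tc(1.0):.2f} periods")  # ~ 5.99
print(f"Gamma = 0.5: Tc = {Tc(0.5):.2f} periods")  # ~ 12.49 -- roughly double
```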