Thursday, March 3, 2016

More like stock-flow inconsistent

One tool (but far from the only one) that Post Keynesians tend to use is stock-flow consistent (SFC) analysis. My original intent was to show how these models could be related to information equilibrium, but instead I seem to have found a major flaw. I'd like to show that models like these can sneak in implicit assumptions under the guise of "just accounting".

TL;DR version: $\Delta$ in SFC models has units of 1/time and therefore assumes a fundamental time scale on the order of a time step.

What follows is from Godley-Lavoie "Monetary Economics" [pdf], specifically their model called SIM (for "simplest"). I refer you to that pdf for any questions about that model, the symbols, etc. They start with one of these accounting matrices:

The subscript d's and s's (and h's) don't matter right now (I'm not sure if they ever do, but they don't for my conclusion), and $WN = Y$ in their numerical example (also doesn't matter for the conclusion). This sets up the equations:

where the subscript $X_{-1}$ means the previous period. Equations 3.6 and 3.7 are what I'd call "behavioral accounting" -- the tax rate set by the government (here assumed to be 20%) and the agents' propensities to consume out of disposable income and out of money holdings (deposits), here assumed to be 0.6 and 0.4, respectively. Godley-Lavoie then run a simulation, with the following results:

I was able to reproduce this without too much trouble. I highly recommend converting this model to what I call "differential" form, because these equations a) are maddeningly redundant and b) sneak in some implicit assumptions that become clearer in differential form. Let's take the current time index to be $t = 0$ (if there is no index, assume it to be $t = 0$). First define

\Delta X_{t} = X_{t} - X_{t-1}

So the set of equations above (I dropped the redundant $Y = WN$ in this model) becomes (in my notes I used D to refer to YD, so there may be some typos; ignore them -- they're not important to the conclusion):

\Delta YD = \Delta Y - \Delta T

\Delta T = \theta \Delta Y

\Delta C = \alpha_{1} \Delta YD + \alpha_{2} \Delta H_{-1}

\Delta \Delta H = \Delta H - \Delta H_{-1} = \Delta G - \Delta T = \Delta YD - \Delta C

\Delta Y = \Delta C + \Delta G

A bit of algebra (helpfully done by Mathematica) can simplify your life greatly, so we have

\Delta C = \frac{\alpha_{1} (1-\theta)\Delta G + \alpha_{2} \Delta H_{-1}}{1 - \alpha_{1} + \alpha_{1} \theta}

And if we run this, we get the same results as Godley-Lavoie -- so I'm not doing it wrong, and in particular I haven't misunderstood what is going on:

Fine. A burst of government deficit spending creates an economy from nothing. Well, I guess the PK interpretation is different. The government borrows 20 € from somewhere and proceeds to spend all of it, causing consumption and the money stock to increase. Or something. Yes, I know it's supposed to be a cheesy model. The various interpretations of the results aren't relevant to my conclusion.
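For concreteness, here is a minimal sketch of that iteration in Python (a stand-in for the Mathematica notebook; the textbook values $\theta = 0.2$, $\alpha_{1} = 0.6$, $\alpha_{2} = 0.4$ and government spending of 20 per period are assumed):

```python
# Minimal sketch of Godley-Lavoie's SIM, assuming the textbook values:
# theta = 0.2, alpha1 = 0.6, alpha2 = 0.4, and G = 20 per period.
theta, alpha1, alpha2 = 0.2, 0.6, 0.4

def simulate(periods=60, G=20.0):
    H = 0.0                                # money stock starts at zero
    path = []
    for _ in range(periods):
        # Y = C + G with C = alpha1*(1 - theta)*Y + alpha2*H, solved for Y
        Y = (G + alpha2 * H) / (1 - alpha1 * (1 - theta))
        T = theta * Y                      # taxes
        YD = Y - T                         # disposable income
        C = alpha1 * YD + alpha2 * H       # consumption
        H = H + YD - C                     # Delta H = YD - C = G - T
        path.append((Y, T, YD, C, H))
    return path

Y_final, _, _, _, H_final = simulate()[-1]
# the path creeps up toward the steady state Y* = G/theta = 100, H* = 80
```

The path shows the same gradual approach to $Y^{*} = G/\theta = 100$ as in the Godley-Lavoie figure, with most of the adjustment taking a couple dozen periods.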


Did anyone see where the PKs managed to sneak in an assumption under the guise of "just accounting"? Check out this equation:

\Delta \Delta H = \Delta G - \Delta T

In case you aren't familiar with finite differences: the left-hand side is a second difference (a discrete second derivative) and the right-hand side is two first differences. Now each time period is "one unit", i.e. $\Delta t = 1$. Let's put the factors of $\Delta t$ back in:

\frac{\Delta \Delta H}{\Delta t^{2}} = \frac{\Delta G}{\Delta t} - \frac{\Delta T}{\Delta t}

Oh, wait, this doesn't quite work mathematically (we divided one side by $\Delta t^{2}$ and the other by $\Delta t$) unless $\Delta t = 1$ ... let's fix that with a timescale $\tau$ (which we could e.g. pull out of the timescale over which $Y$ -- and therefore $G$ and $C$ -- is measured):

\frac{\Delta \Delta H}{\Delta t^{2}} = \frac{1}{\tau} \left(\frac{\Delta G}{\Delta t} - \frac{\Delta T}{\Delta t} \right)

If $\tau = \Delta t$ then that all works out mathematically -- I just divided both sides by $\Delta t^{2}$. But that's the thing! The "accounting" that says

\Delta H = G - T

makes an implicit assumption about the time scale of adjustments (i.e. same as the time scale of the measurements of $G$, $Y$, etc). If you were watching closely, you would have noticed this in the graph of the adjustment:

Where does this time scale come from over which the adjustment happens? There is some decay constant (half-life). It's never specified (more on scales here and here). If you think this unspecified time scale doesn't matter, then we can take $\Delta t \rightarrow t_{P}$ and the adjustment happens instantaneously. Every model would achieve its steady state in the Planck time.
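A short sketch makes the step-size dependence concrete (same textbook values as above; counting steps until the money stock reaches 95% of its steady state $H^{*} = 80$):

```python
# Count the steps for H to reach 95% of its steady state H* = 80,
# assuming the textbook values theta = 0.2, alpha1 = 0.6, alpha2 = 0.4,
# and G = 20. The count depends only on the dimensionless parameters.
theta, alpha1, alpha2, G = 0.2, 0.6, 0.4, 20.0

def steps_to_settle(frac=0.95, H_star=80.0):
    H, n = 0.0, 0
    while H < frac * H_star:
        Y = (G + alpha2 * H) / (1 - alpha1 * (1 - theta))
        H += G - theta * Y    # Delta H = G - T, one step per period
        n += 1
    return n

n = steps_to_settle()
# n = 18 steps: 4.5 years if a step is a quarter, 1.5 years if it's a month
```

The adjustment always takes the same number of steps, so its duration in real time is proportional to whatever $\Delta t$ a step is declared to represent.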

Also note that the time scale cannot be anything else! If it was, then we'd have:

\tau \Delta H = G - T

But then $\tau$ is effectively a money multiplier. And money multipliers are anathema to Post Keynesians. OK, you say -- SIM is a simple model. But even the complex ones have terms with $\Delta X$ coupled to terms without $\Delta$'s ... and you have to insert these time scales everywhere they appear:

Otherwise you are assuming there is some sort of time scale in your model on the order of a time period. The lack of explicit time scales seems to cover up the implicit money multipliers.

Now I'm pretty sure Post Keynesians don't know about this. Why? Because even famous mainstream economists don't seem to know about this. The DSGE framework, written in terms of log-linear variables, escapes the problems associated with these implicit time scales: because everything is log-linear, there are coefficients that make everything dimensionless. Those time scales are still implicit, but they can be changed via different fit parameters -- they aren't implicitly set equal to one as in SFC models. And that is a problem. If you assume "1 quarter" is a fundamental constant of economics akin to the Planck time in physics (or other time scales, like the pion decay constant), you can get all kinds of adjustments and fluctuations that come from nowhere on the scale of "a few quarters".


Update 4 March 2016

There was a request from Cameron Murray below for the Mathematica notebook (let me know if my Google drive settings aren't working):

stock flow.nb

I also edited the text a bit. It's not really the dimensional analysis so much as the implicit time scale -- you can't freely change the time step without fundamentally changing the process. You can make the model independent of the time step size, but that involves adding a time scale ($\tau$, above).

Update 4 March 2016, the second

Commenter Ramanan below steadfastly refuses to see how the relationship $\Delta H = G - T$ is a model assumption with a dimensionless parameter I'll call $\Gamma = 1$. You can take $\Delta H = \Gamma (G - T)$ and change nothing but the rate of adjustment. Here are two versions of the model -- one with  $\Gamma = 0.5$ and one with $\Gamma = 1.0$. There is nothing different about the steady state, only the curvature of the adjustment period has changed. Accounting "identities" are preserved through the entire process.

$\Gamma$ does not have to be 1. It should be a free parameter. Saying $\Delta H = G - T$ is an accounting identity is some sleight of hand that slips the assumption that $\Gamma = 1$ (which governs the rate of adjustment) into the model.
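Here is a sketch of that $\Gamma$ experiment in Python (textbook values assumed; $\Gamma = 1$ recovers the original SIM):

```python
# Delta H = Gamma*(G - T) with Gamma a dimensionless adjustment-rate
# parameter. Textbook values assumed: theta = 0.2, alpha1 = 0.6,
# alpha2 = 0.4, G = 20. Gamma = 1 is the original SIM.
theta, alpha1, alpha2, G = 0.2, 0.6, 0.4, 20.0

def run(Gamma, periods=200):
    H = 0.0
    for _ in range(periods):
        Y = (G + alpha2 * H) / (1 - alpha1 * (1 - theta))
        H += Gamma * (G - theta * Y)   # only the approach rate depends on Gamma
    return Y, H

Y1, H1 = run(1.0)
Y2, H2 = run(0.5)
# both settle at the same steady state Y* = 100, H* = 80;
# Gamma = 0.5 just gets there more slowly
```

The fixed point solves $G = T$ regardless of $\Gamma$, which is why the steady states in the two pictures are identical.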

Update 4 March 2016, the third

Commenter Ramanan below seems to believe taking $\Gamma \neq 1$ in $\Delta H = \Gamma (G - T)$ means assets $\neq$ liabilities. The thing is: the only change between the two systems is the rate of approach to the steady state. The steady states themselves are the same regardless of the value of $\Gamma$. What mysterious force jumped in to restore the equality of assets and liabilities at the steady state?

You can toggle back and forth between those two pictures. The only difference is the rate of approach to the steady state.

Update 5 March 2016, the FINAL

[Ed. corrected a math error (unlike some people). $\Delta H$ is not equal to $\Delta \Delta H$ and there was a sign error -- I wrote $G + T$ for some reason. Neither of these change the result, except the differential equation isn't trivial anymore.]

Ramanan has put up a post where he continues to fail to understand the issue. He's probably too invested to admit a mistake at this point, so this is more for everyone else (and my own sanity). Let's take $G = \gamma \Delta t$, where $\gamma$ is the rate of government spending, and analogously $T = \xi \Delta t$. The equation

\Delta H = G - T

becomes

\Delta H = \gamma \Delta t - \xi \Delta t

and taking the first difference of both sides gives

\Delta \Delta H = \Delta\gamma \Delta t - \Delta\xi \Delta t

Or writing it out in long form:

H_{0} - 2 H_{-1} + H_{-2} = (\gamma_{0} - \gamma_{-1}) \Delta t - (\xi_{0} - \xi_{-1}) \Delta t

Divide through by $\Delta t^{2}$ (this is the finite difference version of the second time derivative):

\frac{\Delta \Delta H}{\Delta t^{2}} = \frac{\Delta\gamma \Delta t - \Delta\xi \Delta t}{\Delta t^{2}}

\frac{\Delta \Delta H}{\Delta t^{2}} = \frac{\Delta\gamma  - \Delta\xi }{\Delta t}

We can re-arrange and simplify (actually just move the $\Delta t^{2}$ to the other side)

\Delta \Delta H= \Delta\gamma \Delta t - \Delta\xi \Delta t

\Delta \Delta H= \Delta G - \Delta T

[It's not equal to $\Delta H$ as previously stated; but this is largely irrelevant. We can still extract the time scale, just not solve the equation directly.]

So we have

\frac{\Delta \Delta H}{\Delta t^{2}} = \frac{1}{\Delta t}\; \left(\frac{ \Delta G}{\Delta t} - \frac{ \Delta T}{\Delta t}\right)

Well, we can't take the limit as $\Delta t \rightarrow 0$ because this equation blows up. So Ramanan's pass through to continuous time in his post and in the comments below is totally wrong. There's an infinity in there that hasn't been dealt with. However, if we put a dimensionless number that depends on the time step out front, say $\Gamma = \Delta t/\tau$, we can:

\frac{\Delta \Delta H}{\Delta t^{2}} = \frac{\Delta t/\tau}{\Delta t}\; \left(\frac{ \Delta G}{\Delta t} - \frac{ \Delta T}{\Delta t}\right)

This is the same equation if $\tau = \Delta t$ because $\Gamma = 1$ (that's the implicit assumption!!!). So in "continuous time" (or just the limit as $\Delta t \rightarrow 0$), we have:

\frac{d^{2}H}{dt^{2}} = \frac{1}{\tau}\; \left( \frac{dG}{dt} - \frac{dT}{dt} \right)

And there's your implicit time scale ($\tau$) made explicit.
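The payoff of the explicit $\tau$ can be checked numerically. Here is a forward-Euler sketch of the reduced flow $dH/dt = (b - k H)/\tau$, where $b \approx 12.31$ and $k \approx 0.154$ follow from the textbook parameter values and $\tau$ is one period; the step size now controls only the discretization error, not the economics:

```python
# Forward Euler on dH/dt = (b - k*H)/tau, the reduced SIM flow.
# b = 12.30769 and k = 0.153846 follow from theta = 0.2, alpha1 = 0.6,
# alpha2 = 0.4, G = 20; tau = 1 period is the explicit time scale.
def H_at(t_end=18.0, dt=1.0, tau=1.0, b=12.30769, k=0.153846):
    H, t = 0.0, 0.0
    while t < t_end - 1e-9:
        H += dt * (b - k * H) / tau   # tau, not dt, sets the adjustment rate
        t += dt
    return H

coarse = H_at(dt=1.0)   # one step per period
fine = H_at(dt=0.05)    # twenty steps per period
# the two trajectories agree up to the usual O(dt) Euler error (~1% here)
```

With $\tau$ explicit, halving $\Delta t$ no longer halves the adjustment time; it just resolves the same adjustment more finely.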


  1. Honestly, no matter how much I try, I don't get SFC at all. All I see is that there is a budget constraint and then a bunch of equations for behavior -- so basically, every SFC model is a budget constraint plus a bunch of really ad hoc assumptions. Is the whole point supposed to be that SFC is an improvement on ISLM in the sense that it actually has a budget constraint?

    Also, in this sense, isn't literally every single DSGE model SFC, since they all have budget constraints?

    The entire thing just seems so useless...

    1. John, do the "ad hoc assumptions" you mention correspond to

      "Equations 3.6 and 3.7 are what I'd call "behavioral accounting" -- the tax rate set by the government set up by the agents (here assumed to be 20%) and how the agents split consumption among disposable income and holding money (deposits, here assumed to be 0.6 and 0.4, respectively)"

    2. Pretty much. Assuming, for instance, that agents always consume x% of their income just doesn't seem like proper economics. This is where neoclassical economics with utility maximization (or perhaps information transfer economics with entropy maximization) comes in; consumption behavior is not simply assumed; it is, at least with utility maximization, derived from assumptions about agents' preferences.

    3. "Also, in this sense, isn't literally every single DSGE model SFC, since they all have budget constraints?"

      Yes. There's no suggestion here that DSGE models are stock-flow inconsistent. (Sometimes, the solution method may violate strict consistency, but it probably does not materially impact the results.) In principle, there is nothing to stop you using an SFC format, but with behavioural equations that are micro-founded (e.g. if you're interested). It is not obvious to me that a behavioural equation is any less ad hoc when derived from some arbitrary assumptions chosen primarily for computational tractability, but I do think it is important to consider a wide range of possible behaviours.

      The point about the different approaches is that they let us look at different things. SFC models help us understand scenarios where complex balance sheets play an important role, such as with financial intermediation. Again, in principle there is no reason why this should preclude the use of micro-founded behavioural functions, but in practice doing so a) leads to the use of more and more implausible assumptions for the sake of computational tractability; and b) increasingly obscures the essential mechanics. These models, as with any model, are simply intended to help aid our understanding of how certain complex systems work.

      That is not to say we should not view the results with caution. In particular, SFC models typically do not use micro-founded behaviour or forward-looking assumptions, and we need to always consider what this implies for the results. But it would be a mistake to assume that, just because a model is micro-founded, we could accept its implied conclusions without applying an equally critical eye.

    4. Nick,

      DSGE models are generally done in the real domain, often without regard to the fact that real quantities cannot be added or divided. So they are generally not consistent accounting. That said, your point is largely correct -- one can do DSGE on an SFC basis.

    5. Srini, is "done in the real domain" related to your comment on the previous post about "real identities" being nonsensical? Can you elaborate on that? What do you mean by a "real identity" and what do you mean that "real quantities cannot be added or divided?"

      pi + pi = 2*pi

      That can't be what you mean.

    6. Tom,

      Chain-weighted real components do not add up to the aggregate. C+I+G+NX does not add up to GDP if all are expressed in chain-weighted real terms. Whereas in nominal terms it has to add up. As a result, you cannot do rate of return on capital using real quantities, but it is frequently done in DSGE without regard to the fact that the GDP shares will then not add up to GDP.

    7. Nick,

      I can see why you might not like the assumptions of neoclassical economics, even though I personally don't have a problem with them. Still, I find it hard to believe that the flat-out assertions about behavior in PK models are any better than unrealistic utility functions -- if anything, they're significantly worse.

    8. Srini, Ah, OK! Thanks. Real as opposed to nominal. Duh! I should have guessed that. Thanks.

    9. John,

      Personally, I probably have more time for PK views on behaviour, but I'm somewhat agnostic. Every time I think I've decided how people behave, I come across something which changes my view. I think it's really important to keep an open mind.

      However, I'd just like to make clear that my point was about SFC modelling techniques, rather than PK behavioural theories. Using an SFC framework doesn't prevent you using behavioural functions that are decidedly un-PK.


      I agree with your observation on the problems of accounting for real quantities, although I don't think I'd go so far as to say that this is more of a problem for DSGE than it is for SFC.

    10. Nick,

      "Every time I think I've decided how people behave, I come across something which changes my view."

      That's the beauty of the Information Transfer Model (ITM): nothing is more agnostic about human behavior!

  2. Interesting. You say

    "A burst of government deficit spending creates an economy from nothing. Well, I guess the PK interpretation is different. The government borrows 20 € from somewhere and proceeds to spend all of it, causing consumption and the money stock to increase. Or something."

    I'm not sure if you are aware, but you have sort of described how the British money system was created in 1694 to fund investment in the navy. The wealthy merchants agreed to 'fund' the government 1.2 million pounds on the condition that they were given the monopoly on printing notes and became the exclusive bank to the government.

    For the rest of it, I will digest it some more. From what I can see you are right, but I think the interpretation of the adjustment and the time is economically important.

    Can you also share your Mathematica notebook please?

    1. Hi Cameron,

      I was aware of that -- they effectively created the Bank of England with that transaction (and why the Bank of England operates a bit differently than, say, the US Fed).

      I didn't have any particular issues with the model results -- although the source of that original 20 € is notably absent in a model purportedly based on accounting :)

      I put a link above and here is another one:

      stock flow.nb

      Let me know if google drive is configured properly.

  3. Does the time scale problem disappear if you accept by definition that delta H is one thing -- the velocity of a money stock -- while delta-delta H is the acceleration of that same stock, both over the same time scales?

    Isn't this standard money flow vs impulse stuff ?


    1. You can make the problem go away by adding in time scale parameters such that changes in your time step leave the results invariant. If Δt changes from 1 quarter to 1 year or 1 month, then the time to adjust to the steady state (in the model above) stays the same.

      As it is, it's about 20 units in the model above. Let's call them quarters. If I change the time step to 1 month, we go from a 5-year adjustment to just under 2 years.

      You can add parameters to make it so the adjustment still happens over 5 years (60 months) even if your time step is 1 month. The issue is that those parameters are "hidden" (implicit) in the standard formulation of SFC models.

  4. "Now I'm pretty sure Post Keynesians don't know about this. Why?"

    Perhaps the confusion is from your side?

    Take the equation relating G, T and H.

    There's no need for a time scale in this because G itself measures the expenditure in the time period.

    So if the US government spends 4tn in one year, G would be 1tn in one quarter.

    If you see the equations, there's no error in dimensional analysis. There's even a time scale in the analysis showing how much time it takes for fiscal policy to act.

    You are the one confused about dimensional analysis.

    P.S. String theory is not Greek to me :-)

    1. P.P.S

      See the mean-lag theorem in Appendix of Chapter 3 of Godley/Lavoie.

      That's some good dimensional analysis/time scale intuition.

    2. You are correct -- there is a time scale in G and T (Y is output over a year). But this time scale isn't explicit -- if I transform my time step to be 1 month, then the adjustment in the model above happens in 20 months. If it is quarters, then it is 5 years. Explicit parameters (factors of your time step) would be needed in the model to make the adjustment period take the same amount of time regardless of your time step.

      That is to say the timescale tau that governs the adjustment process (think of it as a half-life) is a deep economic parameter. Radioactive decay doesn't happen faster if I measure it at a faster rate -- it takes the same amount of time. The half-life of Carbon-14 or whatever is a deep physical parameter.

      The above model could be seen as a radioactive decay model where the half-life tau doesn't appear explicitly in it. It's implicit in the size of the time steps.
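      To make the analogy concrete, here is a toy decay integration (illustrative numbers only): because the decay constant is explicit, changing the sampling step changes the numerical accuracy, never the half-life.

```python
import math

# Toy radioactive decay with an explicit decay constant; the sampling
# step dt only affects numerical accuracy, not the half-life.
# Numbers are illustrative (a Carbon-14-like half-life in years).
def remaining(N0=1000.0, half_life=5730.0, dt=10.0, t_end=5730.0):
    lam = math.log(2) / half_life     # explicit rate parameter
    N, t = N0, 0.0
    while t < t_end - 1e-9:
        N -= lam * N * dt             # forward Euler step
        t += dt
    return N

# after one half-life ~500 of 1000 remain, whether dt = 10 or dt = 1
```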

    3. Jason, if I'm discretizing a continuous time system (x' = A*x) as (x[n+1] = exp(A*T)*x[n]), then the matrix exp(A*T) will clearly depend on T. Are you saying that this dependence on T is somehow hidden or implicit in SFC analysis?

    4. Tom,

      I think that is it.

      There is a time scale missing that should be an explicit parameter (since we'd have to fit the adjustment time empirically).

    5. Jason,

      Use differential equations and solve the model.

      It will give you the same answer as G&L's text.

      The Lance Taylor review I linked does it in continuous time/differential equations.

      And do not violate accounting identities!!

    6. I did. As I show above. And got the same result.

      That's not going to fix the fact that there are hidden time scales ... that if I take Δt to Δt/2, I have to change a bunch of parameters (and variable definitions).

      If you say

      dH/dt = G - T

      You really should say

      G ≡ g/τ₀
      T ≡ ϑ/τ₀

      dH/dt = (1/τ₀) (g - ϑ)

      Don't hide the τ₀.

    7. "I did. As I show above. And got the same result."


      " that if I take Δt to Δt/2, I have to change a bunch of parameters (and variables definitions)."

      Perfectly fine. But what variable redefinition?

  5. P.S.

    Perhaps your confusion lies in not understanding how to translate difference equations to differential equations and vice versa.

    In continuous time

    dH/dt = G - T.

    In difference equations, ΔH = G - T.

    You should read the book Economic Dynamics by Giancarlo Gandolfo.

    It shows how to go back and forth between difference and differential equations.

    1. Is it in any way different than an electrical engineer would do the same? Say, when designing a discrete time equivalent signal processing, tracking or feedback control algorithm to a continuous time circuit or conceptual design? (or vice versa?)

    2. Sometimes this can be viewed as mapping the S-plane to the Z-plane (and thus the left half plane to the unit disk, for causal systems), and vice versa.

    3. or replacing

      x' = A*x

      with

      x[n+1] = exp(A*T)*x[n]

    4. clearly exp(A*T) is a function of T (the sample period).

    5. Ramanan:

      I'm not confused. I also did another round of differencing because it's a much easier way to address the numerical problem (see Mathematica code above).

      So I have

      d²H/dt² = dG/dt - dT/dt

      The problem isn't the equations -- it's the lack of a fundamental parameter with dimensions of time. The only parameters in the model are dimensionless (the alpha's and the theta). There needs to be a parameter with the dimension of time to relate the Δt to.

      That time scale for the adjustment process to the steady state solution has to come from somewhere.

    6. "The problem isn't the equations -- it's the lack of a fundamental parameter with dimensions of time. The only parameters in the model are dimensionless (the alpha's and the theta)."

      That's incorrect.

      Alpha2 has a dimension of inverse of time.

    7. "Alpha2 has a dimension of inverse of time."

      That's because your time scale is hidden!

      Also in the text it is given as 0.4. No units. It really is 0.4/Δt.

      And that's what I am saying. Those time scales are hidden because they're all equal to 1.

    8. And for more complex models, the sample period dependence of the elements of the matrix exponential are not so obvious to write down by inspection.

    9. matrix exponential or time varying transition matrix.

  6. As for the time scale, what about α1 and α2? Defined as propensity to consume from present income and past wealth, respectively. What distinguishes the present from the past, if not the time scale? If you alter the time scale, you have to alter α1 and α2 to get the same results.

    1. Exactly. If the time period is one quarter, alpha2 changes accordingly (alpha1 is dimensionless and doesn't change) and we obtain the same result.

    2. Instead of changing those parameters, why not just include an explicit time scale parameter so when your timestep changes you don't have to change everything else in your problem?

      That would be extremely helpful given you have to have some parameter to fit the duration of the adjustment process empirically in the first place!

  7. In other words,

    Write equations such as

    dH/dt = G - T

    and so on

    and solve the differential equations ...

    You'll get the same results as the ones obtained by difference equations given in the textbook!!

  8. "The government borrows 20 € from somewhere"

    There is no borrowing. It is simply created by virtue of balance sheet expansion.

    Governments never borrow in the same way that banks never borrow. Government is just a particular type of bank. They all just backfill from the circulation.

    Work is done ahead of payment. Loans are established ahead of advances. All these things take time to do.

    Action and money transfer don't happen in instantaneous time. Actions go on over time.

    In other words, in business and in the world, most things are done on credit. Which is money creation. Moving the actual numbers around is a process of *settlement*.

    1. Your points are fair Neil, although I'll debate. Jason's post is simply confusion due to his lack of familiarity with difference equations.

    2. Neil:

      "There is no borrowing. It is simply created by virtue of balance sheet expansion."

      The government spends 20 € (G) which isn't balanced by taxes. That means the money is borrowed.

      If H went up (the money was "printed" or "created"), then maybe what you said would be appropriate. But in the toy model above, G went up, which is government spending.

      In this model, government spending is financed by taxes or by something that isn't specified in the model, which I called borrowing.


      Whenever I write difference equations in the time domain, there are explicit parameters that indicate the fundamental time scales of the problem!

    3. Jason:

      "If H went up (the money was "printed" or "created"), then maybe what you said would be appropriate. But in the toy model above, G went up, which is government spending.

      "In this model, government spending is financed by taxes or something that isn't specified in the model which I called borrowing."

      IIUC, the model does specify it, and it is ΔH, the creation (when G > T) of "high powered money".
      ΔH = G - T

      is not an accounting identity, it is part of the model. The model applies to the US, Canada, and Japan, but not to Spain or California, which cannot create high powered money.

  9. Wrote you a post:

    Exercise: write the equations in differential equations form

    such as dH/dt = G - T and obtain the same result as Godley/Lavoie!

    1. Hi, Ramanan!

      Your post confirmed my expectation that as Δt changes, so do other parameters. :)

      However, I think that Isaac Newton, who developed the theory of finite differences as well as calculus, would object to

      dH/dt = G - T

      We have

      ΔH = G − T


      ∆H∕∆t = (G − T)∕∆t

      If you take the limit as ∆t goes to 0, you will not in general get

      dH/dt = G − T

      But why take the limit? Won't ∆H∕∆t work just as well as dH/dt? (Besides being an observed value?) Difference equations are just fine, no? :)

    2. I see in your post you write about physicists setting c = 1. But c is a fundamental constant of nature.

      Is 1 quarter a fundamental constant of economics?

    3. C'mon Jason. You write a lot about Physics.

      Have you never seen c being set to 1?

      c is not a fundamental constant in a sense because you can choose your units to set it to 1.

      Fundamental constants are things such as the mass of the electron or quark, etc.

    4. Bill,

      The error everyone is making here is to think of ∆ as the differential operator in differential calculus.

      That is not so.

      ∆H = G - T

      is consistent with

      dH/dt = G - T.

      I don't see any problem of taking limits to zero.

      When taking limits you have to be careful.

      In continuous time G is a rate.

      In discrete time G is the spending *IN* the whole interval.

    5. Yes, physicists set c = 1, but they have explicit energy scales in their models so that when you set c = 1 it doesn't change the size of an atom or make you re-write the Schrodinger equation.

      For example, when you set h = c = 1, the size scale of the electron goes from h/mc to 1/m. It doesn't make the scale disappear!

    6. In other words,

      Total spending in time ∆t is G∆t, not G. Similarly for taxes.


      ∆H∕∆t = (G − T)∆t∕∆t

      So dH/dt = G - T.

    7. " It doesn't make the scale disappear!"


      I choose a time period such as a quarter for writing difference equations. That sets a scale. Everything then can be expressed as proportional to it.

      So the ML of 2 means two quarters.

    8. Ramanan, so G and T are rates? It'd be nice if the table said so. Sorry, I should probably just go read G&L.

    9. "I choose a time period such as a quarter for writing difference equations. That sets a scale. Everything then can be expressed as proportional to it."

      That is entirely backwards. You have a fundamental time scale that comes from data or some other theory. You write your difference equations in time steps that are proportional to that scale.

    10. It seems like the sample period should be selected based on the bandwidth of the underlying continuous process.

    11. "In continuous time G is a rate." Ah, missed that.

    12. Why is that backward?

      I mean the whole book can be written in differential equation form. Indeed Lance Taylor does that.

      There's nothing gained if the results are the same.

    13. Ramanan:

      "Total spending in time ∆t is G∆t, not G"


      That's the sound of my high school science teacher grasping his throat and choking to death.

      The book defines G as "Pure government expenditures in nominal terms". "Between time t(n-1) and time t(n)" is assumed. ∆t = t(n) - t(n-1). It is a duration of time, not an operator.

      ∆H = G − T

      means that amount of high powered money increases or decreases during the understood time period by G − T. H, G, and T are all measured in money, in nominal terms.


      ∆H = (G − T)∆t

      is causing my old teacher to gag, because the left side is measured in money while the right side is measured in money x time. Tilt!

    14. Bill, I think I see the confusion. Note that Ramanan writes this above:

      "In continuous time G is a rate."

      he THEN goes on to give you this:

      ∆H∕∆t = (G − T)∆t∕∆t

      In his next comment. I think you're supposed to still be assuming that's continuous time G and continuous time T in that comment (i.e. both are rates in that comment). Make sense?

    15. "∆H = (G − T)∆t

      is causing my old teacher to gag, because the left side is measured in money while the right side is measured in money x time. Tilt!"

      H is measured in money, but ∆H is the change in the money stock over a time interval, is it not?

    16. Bill,

      The G in continuous time is not the same as G in discrete time.

      G in continuous time is a rate.

      In other words, the two Gs in

      ∆H = (G − T)

      dH/dt = G - T

      are different.

      Just by pure dimensional analysis, you can see that.

      In differential equation form, G has dimension of time^(-1).

      In differeNCE equation form G has no time dimension.

      It's better to put subscripts to avoid the confusion.

      But if one sticks to difference equations, it's better to avoid the subscripts which clutter equations.

      What Jason is confusing is nothing else but this.

      To avoid all confusions put subscripts in G.

    17. OK, Ramanan, I think I see what you are getting at.

      Consider Y = X^2 where the X's are the integers. We have

      X: 0 1 2 3 4 . . .
      Y: 0 1 4 9 16 . . .
      ∆Y: 1 3 5 7 . . .

      The ∆Ys are centered on 0.5, 1.5, etc., so we have

      ∆Y = 2X = dY/dX

      Now let the X's be the even integers.

      X: 0 2 4 6 . . .
      Y: 0 4 16 36 . . .
      ∆Y: 4 12 20 . . .

      ∆Y = 4X = 2dY/dX = ∆XdY/dX

      In general, ∆Y/∆X = dY/dX. :)

      Now consider H as a function of time, t, measured in equal intervals. We have

      dH/dt = ∆H/∆t.

      There is no question of taking ∆t to 0, which might be dicey, anyway. Since

      ∆H = G − T,

      we also have

      dH/dt = (G − T)/∆t. :)

    18. Oops! I wrote: "In general, ∆Y/∆X = dY/dX"

      No, that's not right, either, is it?

      I'll buy ∆Y/∆X ≅ dY/dX, which is good enough for the purposes of the model, I expect.

      But that still yields

      dH/dt ≅ (G − T)/∆t = G/∆t − T/∆t
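      Bill's worked example above is easy to check numerically; here is a quick sketch (Python, mine, not from the thread) of the two cases:

```python
# Backward differences of Y = X^2 for step sizes 1 and 2, as in the comment above.
X1 = list(range(5))                                   # 0, 1, 2, 3, 4
Y1 = [x ** 2 for x in X1]                             # 0, 1, 4, 9, 16
dY1 = [Y1[i] - Y1[i - 1] for i in range(1, len(Y1))]  # ΔY with ΔX = 1

X2 = list(range(0, 8, 2))                             # 0, 2, 4, 6
Y2 = [x ** 2 for x in X2]                             # 0, 4, 16, 36
dY2 = [Y2[i] - Y2[i - 1] for i in range(1, len(Y2))]  # ΔY with ΔX = 2

print(dY1)  # [1, 3, 5, 7]
print(dY2)  # [4, 12, 20]
```

      Here the backward difference gives ΔY/ΔX = 2X − ΔX, so it approximates dY/dX = 2X only up to an O(ΔX) error that depends on the step size.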

    19. Bill,

      Check my blog ... second-to-last post. I have written it in a more accessible manner.

  10. Sorry for multiple comments. Hope that's fine. I am writing whenever I get time from everyday matters.

    About time adjustment, in Appendix of Chapter 3, you'll find the mean lag equal to

    ((1 - alpha1)/alpha2) * ((1 - theta)/theta)

    In discrete time, you choose a time period such as a quarter. And if the above expression turns out to be 4, then the mean lag is one year.

    1. That is fine.

      But if you choose your time step to be 1 year, then that expression turns out to be 4 years.

      That is to say Δt is your deep economic time scale (time for agents to find the new equilibrium). Your other time scales are proportional to it -- as you write

      mean lag = ((1 - alpha1)/alpha2) * ((1 - theta)/theta) Δt

      That Δt should be a parameter, not your time step!

      mean lag = ((1 - alpha1)/alpha2) * ((1 - theta)/theta) * tau

    2. "That Δt should be a parameter, not your time step!"

      The time interval is as per my liking. I mean, why is it a parameter? I choose what I want.

    3. No you can't -- not without changing all of the variable definitions and changing the parameters.

      Let's say I observe that the adjustment process takes a bit longer than the 20 time steps (we'll call them quarters), or has a slightly different curvature.

      How do I fit the model to the data?

    4. Sorry I don't understand your point.

      Let's say the continuous time version/differential equation version has the parameter alpha2 = 0.4/yr.

      If I choose a quarter, in difference equations, alpha2 is 0.1.

      Is that so hard to understand?

    5. You also have to change variable definitions and the equation dH/dt = G - T gets a dimensionless coefficient out front that is invisible in the formulation where the time step is one unit.

    6. I'm not saying it is impossible to deal with -- it's just not transparent. There is an implicit time scale in the model formulation.

    7. So let there be an implicit time scale. What's the problem?

      It's an implicit assumption and it takes on a specific value that defines the time-behavior of the model. And it enters through something that is purportedly "just accounting" -- i.e. ΔH = G - T. But it's not just accounting; it's an assumption about the time behavior of the model.

      It's like saying your model for exponential decay is

      N = N₀ exp(-n)

      That follows from "just accounting" for the atoms and the decay products with the equation dN/dn = -N.

      But it's really

      N = N₀ exp(-n Δt/τ₀)

      and the equation is dN/dt = - N/τ₀. That τ₀ does not follow from "just accounting" -- it's a physical property of the system (based on e.g. quantum tunneling). You can take Δt = τ₀, but you need to show that. And it might be different for different atoms.
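      The decay analogy can be made concrete with a short sketch (Python; tau0 = 5 and the Euler update are my illustrative choices, not from the post):

```python
import math

tau0 = 5.0   # physical decay time (an assumed, illustrative value)
N0 = 1000.0  # initial number of atoms

def decay(dt, t_end):
    """Euler steps of dN/dt = -N/tau0, with the dt/tau0 factor explicit."""
    N, t = N0, 0.0
    while t < t_end - 1e-9:
        N *= 1.0 - dt / tau0
        t += dt
    return N

# With the time scale explicit, changing the step barely changes N at t = 2;
# both runs approximate N0 * exp(-2/tau0):
a = decay(dt=0.1, t_end=2.0)
b = decay(dt=0.05, t_end=2.0)

# Writing "ΔN = -N * 0.2" with no Δt (i.e. implicitly Δt = 1) instead ties
# the decay to the number of steps, so relabeling a step from a year to a
# quarter changes the real-time behavior -- the point of the comment above.
```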

    9. Jason,

      There are many things.

      First there are behavioural hypotheses such as the consumption function, imports, exports and so on.

      Then there are constraints. Your model needs to satisfy those constraints. These are accounting identities.

      For example, in a closed economy, you cannot have both the government and the private sector having financial surpluses.

      If you do not add these constraints in your system of equations, your model will lead to impossible states of the world such as both the government and the private sector running surpluses.

      So ΔH = G - T is not analogous to the exponential decay or something. It is just a statement of identities.

      In the simplest model SIM, ΔH cannot be anything but G - T.

      By definition accounting identities cannot involve a behavioural parameter. Else they cease to become accounting identities.

      So there's no place for parameters in

      ΔH = G - T.

    10. ΔH = G - T is defined as an accounting identity in the model, but it represents a model assumption.

      G - T is government spending minus tax revenue (the budget deficit). H is "high powered money". It is not an identity, but rather the model assumption that high powered money is equal to the budget deficit. This is not true in all models (like say Y = C + I + G + NX, which is an identity). Additionally, this assumption has an implicit time scale for the rate of adjustment (the slope is given by the parameter alpha2, the second derivative follows from an implicit parameter -- equal to 1 -- in this equation).

      Maybe you can call it a definition of H. But it's not an identity.

    11. Jason,

      Why would it need to be an assumption if it is an identity, that is, it is always explicitly true?


    12. Hi, Henry!

      It is an assumption because it is not a universal model. It does not apply to France or Spain, because they use the Euro and have to rely upon the European Central Bank to create high powered money (H). That bank will not do so on the basis of (G - T) for either country. But the model does apply to the US, Canada, Australia, Japan, and most modern countries, which have central banks which do create high powered money on the basis of the spending and taxation of their governments.

      It is not really an identity, but within the model it acts like one. :)

  11. Jason, you write:

    "My original intent was to show how these could be related to information equilibrium..."

    It'd be great if that original intent could still be somehow salvaged, perhaps in a future post? I'm very curious which way that may have gone. :D

    1. Alternatively, we could ignore Post Keynesian economics entirely and probably be the better for it...

      O/T I've been trying to wrap my head around critical realism and I found this, which seems to be pretty interesting. As I understand it now (which is admittedly not very well), critical realism suggests that there are multiple valid interpretations of 'reality' (which exists), which apparently has some implications for qualitative research.

      From the negative reaction that you and Noah Smith seem to have toward it, I think I may be missing something. I can see how a positivist wouldn't like it, simply because it's not positivism (which is why I find it strange), but I still don't quite understand what its alleged implications are...

    2. Ha! But regarding realism, this:

      scientific realism is the view that theories refer to real features of the world. ‘Reality’ here refers to whatever it is in the universe (i.e., forces, structures, and so on) that causes the phenomena we perceive with our senses

      is limiting. The approach is supposed to be opposed to "positivism", but positivism is not too far off from "effective theory" -- e.g. you don't really have pions, you have quarks and gluons, but it works (and you get the right result) if you think of pions.

      The thing is, you need both approaches. It is helpful sometimes to consider aspects of your theory to be "real". I used the example before that a Taylor rule "is" a central bank; if a Taylor rule calls for negative nominal interest rates and your central bank can't ... then the Taylor rule must be restricted to positive nominal rates and the implied negative rates are actually nonsense, not an indication of how high you need inflation to go. That's using realism.

      But sometimes you need effective theories -- so all of the critical realism arguments against positivism are basically bunk.

      ... but at the end of the day, philosophy isn't going to get you results. Galileo didn't care about the philosophy of science, he just did it. So a lot of philosophy of science is not relevant to science itself.

      I had an argument on this blog about Popper. What I understood about Popper I considered to be useful -- if you have a theory, you better have a way to test it. I actually use that in practice. But what Popper actually said is pretty wrong about how science should work -- there is no binary true/false to theories (e.g. effective theories are sometimes true).

      So sometimes positivism is good, sometimes realism is good. What makes them good is whether they are consistent with results. And you usually get the results before you understand your approach as positivist or realist -- so they're more ex post labels than ex ante methodologies.

  12. "By definition accounting identities cannot involve a behavioural parameter. Else they cease to become accounting identities."

    It seems Jason has difficulty dealing with this notion?


    1. "By definition accounting identities cannot involve a behavioural parameter. Else they cease to become accounting identities."

      What's actually happening is that what is being called an accounting identity contains an implicit "behavioral" parameter that controls the rate of approach to the steady state.

  13. Ramanan: "The error everyone is making here is to think of ∆ as the differential operator in differential calculus."

    Pas moi.

    1. "The error everyone is making here is to think of ∆ as the differential operator in differential calculus."

      The delta is basically the (backward) finite difference operator, because $\Delta X = X_{t} - X_{t - \Delta t}$.

      Ramanan is making the error that it's somehow independent of the differential operator.

  14. Jason, re: "Update 4 March 2016, the second"

    ΔH = Γ*(G - T)

    I confess I don't get it. So say the sample period is Δt, and assume an all cash economy with no central bank: then isn't G the total amount the government printed & spent over Δt and T the total amount the government taxed & shredded over Δt? If so I don't see how the ΔH (the total change in the amount of cash in the economy over Δt) can be any more nor any less than G - T (assuming a closed economy).

    1. I'm assuming "high power money" in my paragraph (H) is total cash in the economy. But maybe I should shut up and go read G&L... (damn, this just doesn't seem like it should be that difficult... I must be missing something)

    2. ...but if I'm correct in my assumptions, and ΔH, G and T are all defined to be over one sample period Δt, then ΔH = G - T should hold regardless of the magnitude of Δt, no?

    3. If Γ = 0.5, and G = 100 and T = 60 over sample interval n, and if ΔH = Γ*(G - T), then ΔH = 20. Where did the other net 20 that the government created go? Because they printed & spent 100 and taxed & shredded 60, so the net cash created was 40 over sample period n.

    4. Tom,

      Makes sense to me.

      One thing. Ramanan, in his blog, makes the point that H is the stock of money at a point in time. However (using (D) for delta), (D)H is the change in the stock of money over the time interval, so (D)H = G - T is dimensionally sound. This does not appear to be a flaw on these grounds (and Ramanan says Jason claims it is). So Ramanan seems to have missed this (but goes on to argue that the flaw doesn't exist on other grounds anyway).


    5. Reading the paper (pg. 69 - 71), it seems the government prints & spends $20 each period forever, starting at period 2. In steady state they bring in taxes of $20 each period as well, thus the deficit goes to $0 and the total government debt goes to some steady state value. Is that correct?

    6. Tom,

      "Makes sense to me."

      I was referring to

      "..but if I'm correct in my assumptions, and ΔH, G and T are all defined to be over one sample period Δt, then ΔH = G - T should hold regardless of the magnitude of Δt, no?"

      Not sure about the gamma business.


    7. Henry, no Γ in that quote from me you give. Only in my next comment, where Γ = 0.5 seems to leave some money missing in period n: some printed money fell in the shredder before spending it and some collected taxes didn't get shredded.

    8. ΔH = G - T is effectively a definition of high powered money. It says the budget deficit (spending minus taxes) is the change in the stock of high powered money.

      This is not an identity. High powered money is not just some re-labeling of the budget deficit unless you define it to be so. You can define it to be that. You can also define it to be half the budget deficit. That just makes it take longer to reach the steady state.

      The one "accounting identity", Y = G + C (actually also a model assumption since there is I and NX or whatever else you define to be part of output) is never violated.

      If Γ = 0.5, then the only equation that is violated is

      ΔH = G - T

      Which is the equation we put Γ in!

    9. OK, but in a closed economy (no foreign trade) that seems to me to be a perfectly sensible definition, equivalent to the sum total of cash in circulation (H). You could count less than this total by applying a gamma factor to each Delta H, but off hand I can't think of a good reason to do so... But then what do I know? Not much. ;D

    10. I should say it makes the total gov debt = all cash in circulation = H = high power money.

    11. "I should say it makes the total gov debt = all cash in circulation = H = high power money."

      That is the definition of high powered money in this model. But you could also make Γ > 1. I think this is how these PK models eliminate the "money multiplier" -- by fiat, while calling it an accounting identity.

      Note there is no accounting constraint on the stock of H in the model -- only its change (see the accounting matrix). Its level is part of the "behavioral" equation (with the alphas).

    12. I don't know what equation is stopping us from having Γ > 1 or Γ < 1 -- besides the equation with Γ in it.

    13. "Note there is no accounting constraint on the stock of H in the model"

      Yes. They're assuming paper and ink are free. :)

    14. "ΔH = G - T is effectively a definition of high powered money.............

      This is not an identity."

      OK, I think I can see that now.

      Then what is the significance of (D)H in the model?

      I presume it goes on to affect interest rates? Then, via that mechanism, other macro variables?

      Haven't looked at the model closely yet.


    15. Henry -- in a more complex model, you could add in interest rates etc (as they do).

    16. ∆H = G − T

      describes how most modern government banking systems are set up. Europe is a major exception, since the European central bank does not create high powered money according to the deficit of any nation. I do not know why any government would set up a banking system so that

      ∆H = (G − T)/2

    17. Ah! From p. 66:

      "When government expenditures exceed government revenues (taxes), the government issues debt to cover the difference. The debt, in our simplified economy, is simply cash money, which carries no interest payment."

      OK, so H in this model is simply cash. Then to make the accounting work we have to say that any cash the government has on hand at the end of a period is subtracted from T for that period and added to T for the next period.

  15. There are implicit assumptions in any economic model. Economics wouldn't be categorized as a "social science" if it could be proved with mathematics and quantifiable facts. I know it can be difficult for a physicist or mathematician to come to grips with this reality, but it is the unfortunate reality of a world where the interactions of the different variables are dependent on unknowable behavioral effects.

    Accounting, obviously, relies on the same sorts of assumptions. The SFC model is strong not because its users don't make assumptions, but because they understand the limitations of the assumptions. Jason Smith, like Noah Smith before him, has proved that he doesn't understand the model he is attacking. Neither of them bear the credentials or expertise to be discussing the things they are so aggressively attacking and as a result these discussions are leading only to more confusion and not the clarity that good scientists should be seeking.

    1. I have no problem with implicit assumptions.

      I have a problem with implicit assumptions that are called "accounting identities".

      How do I not understand the model? I worked through it above and got the same answers as in Godley and Lavoie.

      And once it's been reduced to a numerical simulation, as a physicist, this is my area of expertise. And my expertise says that there's a time scale coming from the size of the time step -- which means that human constructed units like months matter.

      In any case, information equilibrium is a much better approach!

    2. Jason,

      No one who understands "accounting identities" within the context of SFC models misunderstands the points you're making. You've said nothing that isn't already known. Instead, all you've done is prove that you don't understand the context in which SFC models are utilized.

      Accounting identities require context. Your model is no different. You seem to think you've come up with some model that is superior to a time tested model, however, your model has never been tested in the real world and no one knows how its results would play out. So far, all we know is that you've badly misunderstood the context within which SFC is placed and attacked it even though you haven't fully studied it and by your own admission, don't even understand PKE.

      Rather than writing clever looking math down in a paper, why don't you detail your model's results and put some conclusions out into the world so we can see if this model bears any weight. Lots of people have come before you claiming that their mathematical model of the economy is "better". None have succeeded so far.

      So: what does your model say about the world at present and what are its relevant conclusions?

      Let's see if it can come close to equaling the same predictions that something like PKE has produced. I am curious, but not confident in any mathematical model as we've seen these "mathiness" models play out time and time again in economics with universally useless results. Maybe yours is different. I hope so.

    3. Does no one find it a little ironic that you're talking about PKE as if it's not heterodox? PKE and Information Transfer economics are in the same boat, PKE just happens to be older.

      I can see you making the argument that any theory that attempts to replace established theory must be very well empirically validated for neoclassical economics, but Post Keynesian Economics? Seriously? It's not as if anything Post Keynesian is anywhere near orthodox...

    4. "You've said nothing that isn't already known."

      Really? People using the stock-flow consistent models know that they introduced an implicit time scale anytime they couple a change in stock to a flow ... but don't care?


      "Let's see if it can come close to equaling the same predictions that something like PKE has produced."

      What predictions are those -- I'd like to compare against other models, but all I can find are DSGE models and various central bank projections. There aren't many quantitative predictions out there ...

      The IE model has been fairly successful:

      But yes, I'd love to see lines going through some data!

    5. "Does no one find it a little ironic ..."


    6. "People using the stock-flow consistent models know that they introduced an implicit time scale anytime they couple a change in stock to a flow ... but don't care?"

      Is this a joke? What's next? Are you going to inform the financial world that illiquid assets being marked to market have an implicit time scale?

      Have you taken basic financial accounting? You're misunderstanding some pretty basic assumptions about how financial accounting actually works.

      Please take a course in financial accounting before you demean an entire field of economics with such rudimentary thinking.

    7. I think you have confused "having units of time" with "having an implicit time scale".

      In the model above, there is a characteristic time over which the model achieves its steady state. That isn't "financial accounting", but rather describing the behavior of agents -- the speed of tâtonnement if you will.

      The speed of tâtonnement is set by fiat in the model to be dependent on the choice of time steps.

      Because you measure in months, tâtonnement takes several months to achieve the steady state. If you change to quarters, tâtonnement takes several quarters. If you change to years, tâtonnement takes several years.

      The time it takes to achieve the steady state should be a parameter in the model, not set by the size of the time step.

      I suggest you take a class in basic numerical methods -- your arbitrary choice of time step should not affect the outcome of a numerical calculation.

    8. "... should not affect the outcome of a numerical calculation."

      It will affect the accuracy of the simulation, but the result will not depend e.g. linearly on delta-t.

    9. It was Kalecki who noted that the long-term is nothing more than a series of short-terms!!!!

      Again, if you haven't understood the context within which the accounting is done then you haven't understood the model.

      You clearly aren't familiar with even the most basic assumptions in a SFC model. Good night, sir!

    10. If you don't understand the math, you don't understand the model.

      Accounting seems not to care about the calculus involved in stringing time periods together. Real analysis. Lebesgue measures. It's not trivial, and the long run is not just a series of short terms. That's what calculus is about -- and why it's different from addition and subtraction.

    11. "Accounting seems not to care about the calculus involved in stringing time periods together"

      Sorry, that's completely bogus. You sound like Steve Keen.

      The way accounting is formulated -- see the SNA 2008 handbook for example -- it is completely consistent with all calculus involved.

      The error is in your analysis.

  16. Significance of choosing Δt appropriately:

    Starting with a continuous time model:

    w = pi/5

    x'' = (-w^2)*x

    discretizing with Δt = 10 gives x[n] = x[0], for all n (a constant system: i.e. no dynamics), but then you'd totally miss that it's actually a harmonic oscillator.

    1. This comment has been removed by the author.

    2. This comment has been removed by the author.

    3. ...probably no one cares, but I meant the above to be a cautionary tale of why Δt should always be an explicit parameter in the linear difference equation terms, and a reminder that the dependence on Δt is not necessarily linear in Δt, though of course to 1st order it is:

      expm(A*Δt) ≈ I + A*Δt

      So my example is contrived.
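      The aliasing point is a two-line check (Python; sampling the exact solution x(t) = cos(wt), which is my choice of initial condition):

```python
import math

# x'' = -(w^2) x with w = pi/5 has period T = 2*pi/w = 10. Sampling the
# exact solution x(t) = cos(w*t) with Δt = 10 lands on the same phase
# every step, so the dynamics are invisible at this step size.
w = math.pi / 5
dt = 10.0
samples = [math.cos(w * n * dt) for n in range(5)]
print(samples)  # every entry is cos(2*pi*n) = 1.0 (up to rounding)
```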

  17. Jason,

    What a complete disaster. Sorry for being explicit.

    You perhaps do not know what accounting identities mean and what they are.

    Delta H in one period cannot be anything but equal to G minus T.

    In a mathematics/physics analogy, you are changing Mathematics itself. What you are doing is analogous to saying that the derivative of exp(x) is not exp(x) but some gamma times exp(x).

    With your equations,

    financial assets ≠ liabilities.

    I cannot understand why you refuse to see it.

    1. Ramanan,

      Do you accept the result that Jason has derived, i.e. that the time (not expressed in model time steps but in real time) to achieve stable Y is a function of the time interval used (at least that's what I think he is saying)?


    2. Sorry, my expression in the above post is clumsy and unclear.

      Jason argues that the model as it's presented in G&L produces the result that the time to achieve stable Y is a function of the model time interval used.

      I am asking if you agree with that result?


    3. Ramanan,

      ΔH = G - T

      This is not an equation that says assets = liabilities. It is an equation that says the change in assets = change in liabilities. It does not constrain the level of assets or the level of liabilities.

      But also, this equation has no information in the steady state (it says 0 = 0). It only applies during the adjustment period.

      Tell me what is not balancing if I say

      ΔH = Γ(G - T)

      with Γ > 1. Don't say it is

      ΔH = G - T

      because I changed that equation.

      There is no accounting balance for the stock of H. H is defined in the short run by ΔH = Γ(G - T), but in the long run, it is defined by alpha2.

      You could say Γ measures the speed of tâtonnement -- agents finding the steady state.

      All Γ affects is the rate at which it approaches the steady state and the steady state is the same regardless of the value of Γ.

      I showed it in the two graphics at the bottom of the post.

      If Γ unbalanced the liabilities and assets, why are they then balanced in the steady state????? That steady state is independent of Γ!
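      For what it's worth, here is a minimal sketch of what I mean (Python; the within-period solution for Y is algebra from the SIM equations, the Γ coefficient is the hypothetical modification under discussion, and θ = 0.2, α1 = 0.6, α2 = 0.4, G = 20 are the Godley-Lavoie example values):

```python
theta, a1, a2, G = 0.2, 0.6, 0.4, 20.0  # Godley-Lavoie SIM example values

def run_sim(gamma, steps=200):
    """SIM with the modified equation ΔH = gamma*(G - T)."""
    H = 0.0
    for _ in range(steps):
        # within-period simultaneity: Y = G + a1*(1-theta)*Y + a2*H
        Y = (G + a2 * H) / (1.0 - a1 * (1.0 - theta))
        T = theta * Y
        H += gamma * (G - T)  # only the government's H is tracked here
    return Y, H

# gamma only changes the speed of adjustment; the steady state is identical:
Y1, H1 = run_sim(gamma=1.0)
Y2, H2 = run_sim(gamma=0.5)
# both converge to Y* = G/theta = 100 and H* = 80
```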

    4. I added a picture to the bottom of the post above.

      How does the equality of assets and liabilities suddenly get restored in the steady state after being unbalanced during the entire adjustment period?

    5. Anonymous,

      That's because of his strange definitions.

      I choose a time period such as a quarter. Time lag is a function of alpha2.

      I choose another time period such as a year. I have to change the value of alpha2.

    6. Jason,

      It is true the equation talks of changes of stocks of assets but as a consequence of stock-flow consistency, if you are going to butcher the flow identity, you will end up violating the stock identity

      financial assets = liabilities

      as well.

    7. Hi, Jason. :)

      Pardon me for breaking in.

      "Tell me what is not balancing if I say

      "ΔH = Γ(G - T)

      "with Γ > 1."

      Under those conditions the government is persistently increasing its holdings of cash. For instance, if in one period G = $30b and T = $25b and Γ = 1.2, the government will print $6b, pay out $5b of that, and sit on $1b, never spending it. Does that make sense to you?

    8. The dollar sign had unexpected consequences. I trust that it is clear enough what I meant.

    9. Sigh. I'm not with it these days.

      I only looked at the case where G > T. What if T > G? Will the government destroy not only the extra taxes collected, but burn some of its cash on hand? What if it does not have enough cash on hand?

    10. Well, I spent some time reading through the description of how the simple system works. Maybe I missed it, but I did not find out what happens when T > G. {shrug}

    11. Bill, T > G may not be possible. You can try it out here (or work it out from the expressions I give).

    12. I take that back... it might be possible, but probably not with the parameters given (or any sample period with those parameters). ΔH never seems to go negative.

  18. A general comment.

    Your confusion lies in confusing Physics itself and you bring all that confusion into economics unfortunately.

    Seems you are uncomfortable setting c=1. Relativists do that often.

    c is not a fundamental constant like other fundamental constants. Nobody asks why c is what it is. It is what it is by the choice of units.

    This is unlike other constants such as the mass of the up quark for example.

    Strange mixing up of confusions in both fields!

    1. Physicists may set the speed of light, c, to 1, for convenience, but they do not say

      ∆E = ∆m

      they say

      ∆E = ∆m(c^2)

    2. Ramanan,

      "Seems you are uncomfortable setting c=1. Relativists do that often."

      Ok, that is starting to get insulting. I have a PhD in particle physics. I've been setting c = 1 since long before you ever took a derivative. You obviously don't understand what is going on if you think that is in any way relevant.

      I suggest you flip back and forth between the two graphs at the bottom of the post above until you get it.

      There is an implicit dependence on the time step. This is not dimensional analysis (so the analogy of units of c doesn't matter). This is numerical methods.

      Every time you couple a stock (or change in stock) to a flow you'll get a coefficient that depends on the time step. Just like alpha2! It's just that you're missing one in ΔH = G - T.


      No, that's not right. We would say ∆E = ∆m.

    3. Thanks, Jason. :)

    4. What I had in mind was, physicists do not claim that

      ∆E = ∆m

      even if for convenience they set c = 1.

  19. I went ahead and did this example myself in Excel. I was able to reproduce the same results shown in Godley-Lavoie with the parameters given. I wrote mine in the following format:

    x[n+1] = A*x[n] + B*u[n]
    y[n] = C*x[n] + D*u[n]

    With the following matrix dimensions:

    A is 1x1
    B is 1x1
    C is 4x1
    D is 4x1

    x is dimension 1, and consists only of H
    u is dimension 1, and consists only of G
    y is dimension 4, and consists of the column vector [Y, T, YD, C]


    I embedded an interactive version on a webpage. I only intended the four green cells in the upper left for people to edit. Refresh the page if you mess it up (you won't mess up the original). Looking at Ramanan's notes here and on his webpage, it seemed like he was saying that only alpha2 needs to be compensated for a change in the time step. I set the plot up like that by changing the plotted time step and alpha 2 simultaneously when the parameter Ts (time step) is changed. Alpha2 can be edited directly as well, but only a version which is "per year" (which I assume is the original time step). The version of alpha2 that's actually used depends on the time step and is printed in a white cell to the left, with the other used parameter.

    It doesn't behave as I expected from Ramanan's description. Most of the curves are invariant to changes in Ts, but the curve for H is not: it changes.

    Well, anyway, here it is: check it out for yourself:

    You can download a copy from the icon in the lower right on the black bar across the bottom of the spreadsheet.

    1. I am not sure what you are doing.

      My point is: Let's say you start with time periods equal to one quarter. Now if you say the consumption function is

      ... + alpha2 * W,

      and that alpha2 = 0.1

      All you are saying is that households' consumption is 0.1 of their wealth (in addition to consumption out of income).

      Now suppose you were to instead choose the time period of 1 year. Then you need to change alpha2 to 0.4 to arrive at the same conclusion (time path).
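      This rescaling can be tried directly (Python sketch; treating G = 20/yr and alpha2 = 0.4/yr as rates and multiplying the flows and alpha2 by Δt is my reading of the proposal):

```python
theta, a1 = 0.2, 0.6
g_rate, a2_rate = 20.0, 0.4  # rates per year: G = 20/yr, alpha2 = 0.4/yr

def H_at(t_end, dt):
    """Run SIM to real time t_end with step dt, rescaling flows and alpha2 by dt."""
    H, t = 0.0, 0.0
    while t < t_end - 1e-9:
        # per-period consumption out of wealth is (a2_rate*dt)*H, so with
        # quarterly steps alpha2 = 0.1, as described above
        Y_rate = (g_rate + a2_rate * H) / (1.0 - a1 * (1.0 - theta))
        H += dt * (g_rate - theta * Y_rate)  # ΔH = (G - T) per period
        t += dt
    return H

h_year = H_at(5.0, dt=1.0)  # annual steps
h_qtr = H_at(5.0, dt=0.25)  # quarterly steps
# the two are close but not equal: the residual dependence on dt is O(dt),
# which is the implicit-time-scale point being argued in this thread
```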

    2. Tom,

      "I went ahead and did this example myself in Excel."

      Same here except I used the good old substitution method to eliminate variables (don't know matrices).

      I derived exactly the same results as you.


    3. What I did was write the problem as a difference equation of one state (H) plus an input (G)

      H[n+1] = A*H[n] + B*G

      Both A and B are functions of alpha1, alpha2 and theta

      I treated the other variables (Y, T, YD and C) as outputs or "measurements" of the system, in a standard state-space type representation. The "measurement" matrix C and the pass-through matrix D all have elements which are also functions of alpha1, alpha2 and theta.

      What I'm saying is only a single feedback "state" is required to put this in a typical four matrix state-space representation.

      I did the algebra by hand to calculate the state space matrices (A, B, C and D), so I was pretty surprised when it worked the first time. Still there could be an error.

      @Ramanan, I did exactly what you describe. The spreadsheet is interactive, so you can try it out yourself and see the result. (Change parameter Ts, which changes both the time label on each sample point in the table and the alpha2 value that's used; the alpha2 rate in the green box won't change unless you manually change it.) You probably have to download it to see the formulas in each cell.
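      The single-state reduction described in this comment can be sketched in a few lines of Python (a minimal illustration, not the commenter's spreadsheet; parameter values are G&L's SIM defaults: alpha1 = 0.6, alpha2 = 0.4, theta = 0.2, G = 20):

```python
# Single-state form of SIM: H[n+1] = A*H[n] + B*G, obtained by substituting
# C = alpha1*YD + alpha2*H[-1], YD = (1 - theta)*Y and T = theta*Y into
# Y = C + G and the accumulation equation Delta-H = G - T.
alpha1, alpha2, theta = 0.6, 0.4, 0.2
G = 20.0

k = (1 - theta) / (1 - alpha1 * (1 - theta))
A = 1 - alpha2 + alpha2 * (1 - alpha1) * k  # state-transition coefficient
B = (1 - alpha1) * k                        # input coefficient

H = 0.0
for _ in range(200):        # iterate the difference equation to steady state
    H = A * H + B * G

# Output Y "measured" from the state (Y depends on last period's H, which
# equals H at the steady state):
Y = (alpha2 * H + G) / (1 - alpha1 * (1 - theta))
print(round(H, 1), round(Y, 1))  # steady state: 80.0 100.0
```

      The steady-state values H* = 80 and Y* = 100 match G&L's numerical example.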

    4. This comment has been removed by the author.

    5. A H, I wanted to follow Ramanan's instructions as best I could to give them a shot as is (i.e. only adjust alpha2). I'll come back and try to fix it later.... or give it a shot yourself (you can download my spreadsheet).

    6. ... and like I say, I matched G&L w/ default parameters, but perhaps I have two or more offsetting errors in there... so I could have got lucky(?). I'm tired of futzing w/ it now, but I'll give another look later. (One thing that makes me suspicious is no Ts (my time step parameter) dependence in my B matrix: normally you do an integration to obtain B when converting from continuous time)

    7. I added the text of the formulas for A, B, C and D

      (no attempt made to simplify)

  20. Ramanan,

    "I choose another time period such as a year. I have to change the value of alpha2."

    Recall Jason's ΔH = Γ(G - T).

    Let <> signify an index number.

    Now also recall that in the G&L model there is the term a<2>*H<-1> (a = alpha)


    H<-1> = H - ΔH


    a<2>*H<-1> = a<2>(H - ΔH)

    and also

    ΔH = G - T.


    a<2>*H<-1> = a<2>(H - [G - T])

    = a<2>*H - a<2>(G - T)

    You say when you change the length of the model time interval you also change alpha2.

    So allow a factor F to modify a<2>, hence

    F*a<2>*H<-1> = F{a<2>*H - a<2>(G - T)}

    So there is a term F*a<2>(G - T) in this equation

    which looks like Jason's Γ(G - T).

    Could you comment on this?



  21. Ramanan,

    Further to my last post above it seems you and Jason are pretty much saying the same thing in terms of modifying the model to take account of changes in the model time interval length. So it seems to me Jason has one on you there.

    However, I also think Jason has stirred up a storm in a tea cup over this. The model in the book is generic and designed for pedagogical purposes, so I can't see Jason winning his first Nobel prize over his "grand" discovery of the "flaw".

    1. Winning a Nobel prize in economics for discovering a flaw in heterodox economics? LOL.

      The same flaw appears in the more complex models, BTW. Any time you couple a change in stock to a flow you will get an implicit dependence on the time step. It's just easier to see in the pedagogical model.

      At least for some people.

  22. Anonymous,

    "So it seems to me Jason has one on you there."

    Ha! He is butchering accounting identities and asserting they are behavioural things. How confused can that get!

    There's no modifying the model! What needs to be modified is Jason's thought process.

    Didn't really catch you in detail about your previous comment

    ΔH = Γ(G - T).

    No. No. No.

    ΔH = G - T

    The important point is about converting continuous time variables to ones with intervals (ie with difference equations and periods).

    In continuous time government expenditure is a rate. Call it G_continuous.

    You multiply it by the time interval Δt to arrive at the difference equation variable G.


    ΔH = (G_continuous - T_continuous)Δt

    or ΔH = G - T.
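    Written out, the conversion described here is the standard continuum limit (the notation below just restates the two lines above):

```latex
\Delta H = (G_{\text{continuous}} - T_{\text{continuous}})\,\Delta t
\quad\Longrightarrow\quad
\frac{\Delta H}{\Delta t} \to \frac{dH}{dt} = G_{\text{continuous}} - T_{\text{continuous}}
\qquad (\Delta t \to 0)
```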

    1. In continuous time, government expenditure (G) is almost discontinuous, isn't it? A single financial transaction is completed in a very short time. It is not that we multiply G_continuous by ∆t to get G. We get G by measurement. Or, in a hypothetical, by assumption. If we want to call G/∆t G_continuous, that's OK, I guess, but we start with G and derive G_continuous, not the other way around. Right?

    2. Well continuous time formulation is also an abstraction.

      So instead of saying the government spent $1bn in the morning, $45bn in the afternoon, we say the government is continuously spending at a rate.

    3. Ramanan,

      "Didn't really catch you in detail about your previous comment

      ΔH = Γ(G - T).

      No. No. No. "

      You don't think my bit of facile algebra has any merit? It looks pretty clear to me.

      "In continuous time government expenditure is a rate."

      We're not talking about a continuous time model here. We are talking about a discrete time model.

      You're slipping off the argument plane. Please stick to the argument.


    4. Where do I slip.

      But if you want a more careful answer here it is:

      Sorry for multiple posts, but threaded comments can be confusing for everyone... Just in case, so you don't miss it:

    5. "Where do I slip."

      We're not talking about a continuous time model here. We are talking about a discrete time model.

      I checked your blog.

    6. Ramanan,

      "Ha! He is butchering accounting identities and asserting they are behavioural things. How confused can that get!"

      I do think Jason has a problem with a penchant for converting identities into behavioural functions.

      But in this case I'm not so sure.

      ΔH = G - T sort of looks like an identity but then not. It sort of looks like a behavioural function but not really.

      It seems to me it's an equation which links the two parts of the G&L model. It's an equation which links the flow variables to the stock variables.

      If it was an identity it would perforce have to be part of the model but I'm not so sure this is the case. The stock part of the model might exclude money altogether.

      It seems to me that it is closer to Jason's description of it as an assumption of the model.

      That being the case I don't think the storm in the tea cup that Jason has raised defeats the model. He has raised a reasonable technical point about relating flows to stocks and the impact that has on the characteristic time to steady state. And you, by admitting you make adjustments to a<2>, are doing just what Jason is saying should be done to compensate.

      I think you are both getting your knickers in a twist about not a great deal.

    7. Henry, "And you, by admitting you make adjustments to a<2>, are doing just what Jason is saying should be done to compensate." Except I don't think that works, but I think I know what does work. See here.

  23. If ΔH = G - T had instead been written as, say, J = G - T, where J is the flow equivalent of the change in stock H (i.e. J, G, and T would be compatible in units and time scales), would this issue still have arisen?

    FRED will get you to G-T in multiple ways:


    1. Yes, because in general you need a coefficient out front that depends on the time scale ... ΔH = Γ(Δt) J ... in order to make it work.

      I think a really good analogy comes from circuits. The basic equation is that voltage equals current times resistance: V = i R.

      Voltage is a (change in) "stock" -- all voltages are measured relative to some other voltage. Current is a flow (of electrons).

      What we have in the model above is the statement

      V = i

      declared to be an "accounting identity". But that means resistance R = 1 always. So an RC circuit has a time constant (controlling the rate of approach to the steady state) that seems to only depend on the capacitor C ... τ = C, when in fact it depends on both R and C: τ = RC. There is an implicit dependence on R because you set R = 1, and it appears to disappear in the equations.
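      The RC analogy can be made concrete with a small simulation (illustrative values, not from the post): with the time scale τ = RC explicit in the update rule, the computed trajectory is invariant to the step size dt, and shrinking dt only improves accuracy.

```python
import math

# Euler update for an RC charging curve with the time constant explicit:
#   V[n+1] = V[n] + (dt / tau) * (V_in - V[n]),   tau = R*C
# Dropping the dt/tau factor (the analogue of declaring V = i an
# "identity") would make the real-time approach rate depend on dt.
def step_response(tau, dt, t_end, v_in=1.0):
    v = 0.0
    for _ in range(round(t_end / dt)):
        v += (dt / tau) * (v_in - v)
    return v

tau = 2.0                                  # seconds; illustrative value
exact = 1.0 - math.exp(-4.0 / tau)         # analytic value at t = 4
coarse = step_response(tau, dt=0.01, t_end=4.0)
fine = step_response(tau, dt=0.001, t_end=4.0)

# With the explicit time scale, refining dt converges to the same curve:
print(abs(coarse - exact) < 0.01, abs(fine - exact) < 0.001)  # True True
```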

  24. "What mysterious force jumped in to restore the equality of assets and liabilities at the steady state?"

    Assets and liabilities are always equal; there is no force necessary to make this so. This statement is not even wrong and shows a deep ignorance of accounting.

    1. Hello AH,

      What happened was that I changed an equation that (according to Ramanan above) kept assets = liabilities. So A ≠ L, right?

      But as I show in the two graphs at the bottom of the post, both A ≠ L and A = L versions of the model lead to the same steady state -- and the only change in the model is the rate of approach to that steady state.

      So does one steady state have A ≠ L and one steady state have A = L? But they're the same steady state.

      How can that be?

      Did a mysterious force jump in and make A = L in the A ≠ L model? Not really, that was a rhetorical question. What really happened is that the equation I changed -- that Ramanan says makes A ≠ L -- isn't really an accounting identity. It just controls the rate of approach to the steady state.

      And because it controls the rate of approach to the steady state, it represents an implicit time scale -- a buried model assumption passed off as "accounting". That's why you have to change all the flow variables and some of the coefficients if you change the time step Δt from quarters to years in order to keep the equation ΔH = G - T invariant -- because there is an implicit time scale in ΔH = G - T. You could just put it in there explicitly and you wouldn't have to change anything if you changed Δt. But ΔH = G - T is an "accounting identity", so you have to bend over backwards to keep that equation as it is. Even if it means changing Y = C + G, I guess.

      I will say this is not accounting. This is a classic error in numerical methods where the calculation has a hidden dependence on the time step Δt because an equation is missing an explicit time scale (to cancel the implicit dependence). As Δt goes to zero, the only change should be that your calculation gets more accurate.

      It probably happened because only business school and accounting grads have been using the model. Apparently no physicist has ever looked at it!

      It's not that it can't be saved; you just have to add in a couple of timescales here and there. But being adamant that

      ΔH = G - T

      is an "accounting identity" rather than a dynamic equation is going to hold you back.
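      The claim above -- same steady state, different approach rate -- can be sketched numerically (illustrative Python, not the post's code; parameters are G&L's SIM defaults) by inserting a coefficient Γ into the accumulation equation, ΔH = Γ(G - T):

```python
# SIM with a coefficient gamma in the accumulation equation. The steady
# state requires G = T (so Delta-H = 0) regardless of gamma; gamma only
# sets the rate of approach.
alpha1, alpha2, theta, G = 0.6, 0.4, 0.2, 20.0
D = 1 - alpha1 * (1 - theta)  # from substituting C and YD into Y = C + G

def simulate(gamma, steps):
    """Iterate SIM with Delta-H = gamma*(G - T); return the path of H."""
    H, path = 0.0, []
    for _ in range(steps):
        Y = (alpha2 * H + G) / D          # output implied by last period's H
        H = H + gamma * (G - theta * Y)   # modified accumulation equation
        path.append(H)
    return path

fast = simulate(gamma=1.0, steps=500)
slow = simulate(gamma=0.5, steps=500)

print(round(fast[-1], 2), round(slow[-1], 2))  # same steady state: 80.0 80.0
print(fast[10] > slow[10])                     # gamma = 1 gets there faster: True
```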

    2. Your error is a confusion about what the quantities you have mean.

      " A ≠ L and A = L "

      Perhaps you are not trained in finance and you cannot see the absurdity of the statement. As I said, it's a bit like inventing some mathematics where the derivative of exp(x) is not exp(x) and claiming something something.

      How can financial assets not be equal to liabilities?

      I mean how do you explain how these two are not equal. Is the claim on someone on Mars?

      The confusion you have is that you cannot translate from continuous time to discrete time well. Your analysis is a confusion about what these quantities mean.

      What you derive is just

      ΔH = (G_continuous - T_continuous)Δt

      And nothing else!

      Your entire post is pure tautology that you claim is a new discovery.

      Wrote you another:

      And a slightly related post/quote:

    3. " A ≠ L and A = L "

      I don't understand what the fuss is about.

      There is only one asset, money. It's an asset in the hands of the consumer. It's also a liability of the government. They are precisely offset, always.

      What am I missing?

      Can someone please explain what the fuss is about?


    4. Ramanan,

      Your posts are incorrect.

      See my update to the post above.

      Simply passing through to continuous time "by inspection" without working out the $\Delta t$'s is an error. You have to be careful about the limits. I worked it out above.

  25. Jason,

    Bonds don't finance anything. They are just an asset swap. Previous *spending finances bonds* - the money was issued in the first place. You have your causality the wrong way round.

    In the UK BoE offers *unlimited intra-day overdrafts* and nobody operating those accounts needs to co-ordinate the balances until the end of the day. That's how it works.

    The Treasury spends asynchronously and with no direct relationship with the actions of the debt management office or the Bank of England.

    What the debt management office does is maintain the reserve balance in the Treasury accounts at a set value - entirely for the sake of appearance.

    There is no control function here, it is pure politics. Government cheques don't bounce because there is nobody with the capacity to do that. If Treasury decides to spend, then the cheque clears.

    Dynamically it is better for all instrument sellers in the system for the overdrafts to be at their highest points *before* they start offering their own instruments into the market. Since that would guarantee the best bid.

    What I'm saying is that the DMO does what it can to get the best deal for the Treasury, and the best is likely to be achieved if it offers after it believes those overdrafts have expanded - since there will be more bids in the market and they would get a better price.

    I would expect a serious treasury department in any organisation to know when they get the best price for their placements.

    So I would suggest it is irrational behaviour for a treasury department of any organisation to pre-fund if they can get away with it.

    Choosing bonds or interest on reserves is just an asset swap and has *nothing* to do with 'funding' government spending.

    The key analytic technique that MMT uses that sets it apart from most others, is that it uses a consolidated government sector in its analysis. This allows it to cut through the obfuscating political constructions between the various government departments and institutions and concentrate on the essence of what is happening.

    This is entirely consistent with accepted accounting practice, using a technique known as group accounting - which produces consolidated financial statements (income, balance and cash) amongst a related group of entities. The international accounting standard for that is IFRS 10 'Consolidated Financial Statements' which requires that entities under common control present a consolidated set of accounts so that external users can obtain a 'true and fair view' of the actual underlying economic transactions.

    The Central Bank in all sovereign jurisdictions falls under the definition of control by the Treasury - often de facto by the operation of law (Bernanke: "Our job is to do what Treasury tells us to do"), but also de jure, e.g in the Sterling area HM Treasury actually owns the entire shareholding of the Bank of England.

  26. I was following -- on and off -- the discussion, largely between Ramanan and Jason. It was a loooong discussion, too: about 35 hours -- wow! -- from the moment John Handley posted the first comment (March 3, 2016 at 7:31 PM) until the moment another Anonymous (March 5, 2016 at 5:44 AM) nailed it:

    So it seems to me Jason has one on you there.

    However, I also think Jason has stirred up a storm in a tea cup over this.

    Those are words of wisdom.

    Jason is, of course, right, as that other Anonymous very correctly wrote. There is a problem with SFC models.

    But it's not a fatal problem by any means.

    Moreover, if I understand the SFC thing, those models are not meant to be quantitative but merely qualitative representations: for example, it is not so much how long it takes for the system to reach steady state that matters, but whether it reaches steady state at all. In other words, when interpreting charts like the last two Jason added, the idea is not to say "the system will reach steady state precisely 61.91022 time units into the future" and set the chronometer watch. The idea is that it will reach steady state.

    On the other hand, the post Keynesians really got in over their heads in this discussion. They, like, went totally bananas. Too much passion/attitude. You urgently need to shed the tendency toward hand-waving, stubbornness, and posturing.

    Read more, read better. Think.

    If this post revealed a serious problem with post Keynesians it was that.


    I loved this:

    "oh no! assets <> liabilities" "suddenly assets = liabilities joy!" :-)

    1. "but merely qualitative, representations: for example, it is not so much how long it takes for the system "

      That's quite inaccurate. If you see G&L's book, there's a mean lag theorem with mean lag derived as a function of alpha1, alpha2, and theta.

      "Jason is, of course, right, as that other Anonymous very correctly wrote. There is a problem with SFC models."

      If you want to think of a world in which accounting identities are not satisfied, then there indeed is a problem. But only in the imaginary world.

      It's really important to get the accounting right. Most economics concepts are national accounting.

      It's fine if you think that economics should be studying it another way but if you are talking in the language of economics, you do not butcher accounting. Everything, GDP, output, production, consumption, investment, balance of payments .... they are all accounting concepts.

    2. Whatever you say. Keep discussing with Jason!

    3. @ Anonymous:

      "But it's not a fatal problem by any means"

      YES! I agree! They just need some extra coefficients. But I think it spoils the "accounting identity" purity of the model, so that's probably why there is some reluctance.

    4. But was the model even 'pure' in the first place? I mean, I'm sure SFC is just fantastic, but who should care if the behavioral assumptions are ridiculous... Not to mention the lack of AS... Well, that's just my two cents anyway.

    5. Where did a PK economist ever say the SFC model was "pure"? Jason and the other commentators here have created a strawman based on their own misunderstandings of SFC and PKE.

      All economic models have limitations. A supply and demand model, the most basic model of economic transactions, has its own temporal and behavioral problems. Good economic scientists don't use models because they think they're "pure" models of the economy. They know they are imprecise guides for potential future outcomes. This is why math is pretty useless for a lot of economic models. It provides a false sense of certainty over an inherently uncertain model.

      This big kerfuffle was stirred up by physicists misunderstanding basic assumptions of economic modeling.

    6. We have Ramanan and others reacting to Jason's piece as if he had wrought an existential threat to PKE and the G&L model, which he hasn't.

      We have Jason thinking (well, at least initially) he has brought PKE to its knees, and he hasn't.

      It seems to me it's Don Quixote1 meets Don Quixote2.

      Best we all retire to La Mancha.

    7. Where did a PK economist ever say the SFC model was "pure"?

      I was only suggesting "purity" was a possible motive for why these PK people seem to refuse to believe there are implicit time scales in SFC models. It could be that PK people don't know what they are talking about. It could be both. It could be Dunning-Kruger. It could be just ordinary stupidity.


      All economic models have limitations.

      This is not a limitation. It is a math error.

      It is fixable. But no one seems to want to fix it.


      This big kerfuffle was stirred up by physicists misunderstanding basic assumptions of economic modeling.

      Not really. This is a mathematical error.

      Trying to argue from "authority" is garbage. If economists don't understand the problem and think this physicist is just misunderstanding the economics, then the economists are just dumb.

      Show me why I am wrong. Don't give me BS about not understanding econ. If this is the way economists use finite difference equations then all of economics is wrong.

      But it isn't -- at least for mainstream economists. They don't use these SFC models. So maybe those that use SFC models don't understand mathematics and economics.

      Why, if I change the time step, does the supposedly economic process happening in real time happen faster?

      What possible economic argument would lead to the result that if we re-labeled the calendar to have 24 half months (January, Sjanurary, February, Sfebruary, March, Smarch, ...) it would change recessions to be shorter?

    8. Some of the most brilliant mathematicians in the world devise VaR models for use in financial modeling. These models sometimes use standard duration analysis resulting in significant "math errors". This doesn't mean that VaR models are "wrong".

      What we do know about VaR models and other heavily math dependent models is that anyone who is overconfident about their results doesn't understand how imprecise the financial world is. We got the Great Financial Crisis thanks to overconfident mathematicians like you. This is one reason why Post-Keynesians are very cautious about relying too heavily on math based models of the economy. They lead people to think they've solved something that's unsolvable.

      Your whole model fails the Lucas Critique at the most basic level. This is something you can never overcome in a math based model and it's why math based models have always come under attack in economics. PK economists realize this which is why you're acting so defensive here.

    9. "So maybe those that use SFC models don't understand mathematics and economics."

      C'mon Jason, you're using the argument they're using against you and the same argument you are railing against.

      Chill. :-)

    10. ...resulting in significant "math errors"

      A model assumption on its own is not a math error. A model assumption that makes the answers depend on the arbitrary size of the time step in a numerical simulation is a math error.


      Your whole model fails the Lucas Critique at the most basic level.

      Not really. Check it out:

      But the rest of your paragraph after that sentence is a word salad. Lucas wasn't criticizing math. He was criticizing using statistical regularities to build models -- because those regularities could fail if used for policy.

      Lucas was in favor of a very mathematical approach -- microfounded rational agent models.


      C'mon Jason ...

      Ha! You're right. The sarcasm doesn't seem to come across as well in writing. I stopped really caring about whatever these people have to say and have just been verbally sparring. They aren't going to listen. I feel satisfied that after looking into the issues brought up in the first few comments and checking my work that it's not me who's refusing to see reason.

      Heck, I was excited to try and write out an SFC model as an information equilibrium system! I might still be able to figure out how to do it ... but adding the explicit time scales to make it work out mathematically will upset the very people I'd be trying to convince it's a good idea.

      C'est la vie.

    11. "Heck, I was excited to try and write out an SFC model as an information equilibrium system!"

      I'm still for not doing this ever, if that makes any difference...

  27. Jason, check it out, I think I figured out how to make SIM invariant to the time step (which I call the sample period Ts):

    The secret: I convert it back to a continuous time system, and then from the continuous time reconvert it to a different sample time. The mechanics are in the cells to the right of the chart (you have to use the horizontal slider to see them)

    Basically I do this:

    a_cont = ln(A_orig)/Ts_orig
    b_cont = B_orig*a_cont/(A_orig - 1)


    A_orig = expm(a_cont*Ts_orig)

    The "orig" subscript means the original parameters for alpha1, alpha2 and theta. The "cont" subscript means continuous time:

    H(t)' = a_cont*H(t) + b_cont*G(t)

    Where G(t) = 20 for all t > 0
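    The reconversion procedure above can be sketched in Python, under the assumption that the discrete system is the zero-order-hold discretization of the continuous one (A_orig and B_orig below are illustrative values rounded from the SIM defaults, not taken from the spreadsheet):

```python
import math

# Original discrete-time parameters (illustrative, rounded from SIM
# defaults with alpha1=0.6, alpha2=0.4, theta=0.2, Ts=1):
A_orig, B_orig, Ts_orig = 0.84615, 0.61538, 1.0

# Back out the continuous-time parameters (the formulas quoted above):
a_cont = math.log(A_orig) / Ts_orig
b_cont = B_orig * a_cont / (A_orig - 1)

def discretize(a, b, Ts):
    """Zero-order-hold discretization of H'(t) = a*H(t) + b*G(t)."""
    A = math.exp(a * Ts)        # expm() = exp() for a single state
    B = b * (A - 1) / a
    return A, B

def simulate(A, B, Ts, t_end, G=20.0):
    H = 0.0
    for _ in range(round(t_end / Ts)):
        H = A * H + B * G
    return H

A_half, B_half = discretize(a_cont, b_cont, Ts=0.5)
# Trajectories then coincide at common sample times, e.g. t = 10:
print(round(simulate(A_orig, B_orig, 1.0, 10.0), 4),
      round(simulate(A_half, B_half, 0.5, 10.0), 4))
```

    Note that both A and B change with Ts here, consistent with the remark above that B should depend on the time step.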

    1. Also note that in this case expm() = exp() since there's a single state (i.e. the matrix exponential is the same as the scalar exponential)

    2. There are some small discrepancies, but I think that's due to G being zero for one sample period at the beginning, thus when it finally comes on line, it's not perfectly the same for all values of Ts. I can probably fix that so it's exactly the same (on sample times that are the same) with a little more work.

      And of course higher sample rates will result in more interpolated points between the ones shown for a lower sample rate (even if they correspond exactly on sample times that happen to coincide).

    3. Takeaway message: you'll note that both A and B change when Ts is changed, yet the original B (B_orig) has no dependence on alpha2_orig... thus only changing alpha2 for a change in Ts will fail to change B appropriately. B *SHOULD* have a dependence on Ts.

      Unless you can write the whole thing some other way... that I didn't think of. Still, it appears that the story that alpha2 need only be scaled to match a new Ts is not the full story. ... that is unless I did something very stupid (definitely not out of the question!).

    4. Yes, there will be some problems -- changing the time step will change the relative accuracy of the problem, and the sudden jump in G requires an infinitely small time step to get right.

    5. In the description below the new spreadsheet I've now added explicit formulas for A and B in terms of Ts, Ts_orig, alpha1, alpha2 and theta.


    6. "Yes, there will be some problems"... right, but you can minimize them by using smaller steps. I put a 1000 time samples in my table, so I can go down to Ts = 0.1 years without losing information on my chart (the chart goes out to 100 years). Then you can see there's very little difference between Ts=0.1 or Ts=0.2, etc.

  28. Just curious, how many people who commented here (besides Jason and me) are not Post Keynesians?

    1. I would say I'm not. But then I'm not anything! I used to follow Cullen Roche's blog (pragcap) pretty regularly, and I even made a blog about bank accounting he still links to under his "Education" pull down. I don't know anything about bank accounting really, but I learned something about it while absorbing his material, so it was useful for answering questions from other readers.

      So back when econ first struck my fancy, I did some reading about MMT, Steven Keen, post-Keynesians, and "Monetary Realism" (which is the offshoot of PKE / MMT that Roche calls his thing).

      But I can't claim to be any kind of expert. I always deferred to folks like Ramanan, JKH and AH (some of whom comment above).

      I guess I finally wandered away when I realized that people like Scott Sumner and Nick Rowe aren't as stupid as *some* of the PKE crowd make them out to be (not necessarily anyone commenting here). They may still be stupid (or just "wrong" is probably better), but it's certainly not for me to judge.

    2. I hardly think Rowe and Sumner are representative of mainstream economics...

      But, otherwise, this entire comment section (with the exception of reasonably agnostic Nick E) has been a giant argument over something almost entirely inconsequential.

      I forget who said it first, but whoever did is right; the flaw Jason is pointing out is insignificant, but all the PK people are blowing up anyway. It really stinks of dogmatism...

    3. John, if you get a chance, check out my second spreadsheet and the description below and see if you agree. The upshot: it appears scaling alpha2 is not the right approach to making SIM invariant to the time step (sample period Ts in my sheet). I demonstrate a different method. You can try changing Ts (and other parameters) on the embedded spreadsheet to check it out w/o messing up my original.

    4. Re: Rowe and Sumner: not just them, but some in the PKE or related crowds go after everyone, including Krugman, DeLong, Cochrane, etc. Now I don't have a problem with them going after those folks, but when I first noticed this it seemed that the whole world of "mainstream" econ must be incredibly stupid! They don't know how banks work, or what accounting is, etc. I eventually realized that's not really the case, so I can see how someone like Rowe, presented with the ravings of a neophyte admirer of PKE must roll their eyes. Nick always told such people "Please, for the love of God, just go read an intro econ text!" (paraphrasing). Lol. Well... I still haven't done that! Lol... I find it easier to be humble instead. Note that I'm not accusing any of the commenters here of that... if anything I've seen them do well to "police their own" so to speak: pointing out gaping holes in Keen's or MMT analyses, and what not.

      Frankly I've never really dug into the details like this before, except once when I did a similar interactive spreadsheet for one of Nick Edmonds' models. At the time I didn't spend much time thinking about what was in the model... it was more a challenge of seeing if I could get the free version of online MS Excel to actually solve it (complete with multi-dimensional Newton Iterations!... which may not have been actually required, but that's essentially what Edmonds was doing, though he didn't know it). Nick E. wasn't sold on the idea (I thought it'd be cool if he could make his models interactive)... and probably wisely so. Lol. (It was a much bigger pain in the ass than I'd hoped it would be).

    5. "I forget who said it first, but whoever did is right; the flaw Jason is pointing out is insignificant, but all the PK people are blowing up anyway."

      Yes, this is true; it is insignificant, mathematically. And I said as much. Several times.

      But I think the reason it's blown up is that the change required to fix it really goes against the philosophy of PK and SFC analysis. There's a degree of freedom that they want to stamp out -- it seems to effectively be a "money multiplier", something that is anathema to PK. I mention it in the original post. The equations as they lay them out are "accounting identities". Saying "X = Y + Z is an accounting identity" is qualitatively different from saying "X = k(dt) (Y + Z), and as Post Keynesians we take k = 1 for dt = 1 quarter".

    6. Jason, if I'm correct, the change required to fix it isn't what Ramanan suggested either (scaling alpha2 by the ratio of the new sample period to the old). The solution is quite different than that (if I can claim to have found the solution). Scaling alpha2 doesn't appear to work.

    7. Ah, I didn't see you had up another update... perhaps you're saying something similar...

  29. I agree with Nick Rowe (directed at everyone here): please, for the love of God, read a mainstream econ textbook.

    Well, I do have to admit that I've never actually done this either, but Noah Smith is right; there are slides everywhere and they are extremely useful. Otherwise OLG is pretty cool, but NK is called the mainstream macro workhorse for a reason.

    Another confession: I learned RBC before I learned IS-LM. Honestly, most basic macro models (except for Solow-Swan, which is awesome) are unnecessary (read useless); all you need is NK and a basic understanding of RBC and monetary economics.

    Basically, if you are somewhat competent in calculus (or, like I was, are willing to teach yourself in order to understand the models), you should just learn the building blocks of NK -- namely, monopolistic competition and RBC -- and you'll be pretty competent.

    Honestly, my gut reaction to PKE is the same as Noah Smith's: kill it with fire! Which is why most of my comments on this post are borderline mocking...

    1. "Well, I do have to admit that I've never actually done this either"

      If you didn't confess that I was going to call you out! Lol (per our convo on your blog some weeks back)

    2. I can say one thing for my economics-learning-method: undergraduate courses will be mostly new to me, even if (though) they are useless...

    3. Or maybe just a numerical methods book?

      Seriously, this is just a case of a calculation that isn't stable as the time step is reduced -- but the impasse is an equation that is called an "accounting identity", so I guess it must be about economics. Ugh.

    4. I'm really more concerned with, y'know, the fact that PKE exists in the first place than whether or not they understand basic numerical methods, which is why basic macro (or, even better, micro -- so they'll learn why we need to model supply) textbooks are so important.

    5. I just realized that I put "Otherwise OLG is pretty cool, but NK is called the mainstream macro workhorse for a reason." in the wrong place.

      It goes at the end of the next paragraph.

  30. Jason, this may be your all time record holding post for number of comments.

    1. But, if you don't count all the arguing, I think the post is average...

  31. The above discussion inspired me to look up this funny post by a mathematician by the name of Matheus who apparently had had some negative feedback from some PKEists... (there's just a few funny bits, but they stand out).

  32. Jason, in your 5th (and final) update, you write:

    ΔΔH = ΔG + ΔT = ΔH

    I'm probably just giddy from looking at this post too long, ... but I'm just not seeing where it is you're getting the

    = ΔH

    part. Can you clue me into which of the lines above that one leads to the "= ΔH" conclusion?


Comments are welcome. Please see the Moderation and comment policy.

Also, try to avoid the use of dollar signs as they interfere with my setup of MathJax. I left it set up that way because I think this is funny for an economics blog. You can use € or £ instead.

Note: Only a member of this blog may post a comment.