Friday, December 30, 2016

Chris Dillow, information equilibrium is the framework you're looking for

I think Chris Dillow makes a fine point at the end of his review of The Econocracy about the role of the media with regard to how macroeconomics is disseminated and perceived. I just wanted to add a bit to something he said earlier in his post regarding teaching economics:
For example, some important basic facts in macroeconomics are that: there’s no such thing as a “representative firm” (see this great paper (pdf) by Nick Bloom and colleagues); that GDP growth in developed nations is often stable but interrupted by occasional crises; and that recessions are unpredictable. The sort of theory that can account for these facts is very tricky. A shift away from equilibrium theories towards complexity, evolutionary models and agent-based modelling would require massive changes for which students and perhaps academics are ill-prepared.
I'd like to proffer the information equilibrium/statistical equilibrium framework as a theory that can account for these facts without being "tricky".

Information equilibrium is a type of equilibrium, but the underlying information theory (per the abstract of Fielitz and Borchardt's paper) "provides shortcuts which allow one to deal with complex systems" (and has some connections to evolution: here, here). It is a kind of first-order analysis using a generalized thermodynamics that lacks a second law, a lack that seems to have a connection to recessions. The outputs of agent-based models look like information equilibrium relationships.

There are two major regimes in the information transfer framework: information equilibrium and non-ideal information transfer. A lot of standard economics follows from the information equilibrium regime (it leads to straightforward supply and demand as well as a utility-like description). Market failures can appear in the second regime. In this regime, recessions can come from irrational or complex social factors (connected to that lack of a second law) as well as various kinds of shocks. This gives a general view of the business cycle in which recessions are likely unpredictable (like avalanches), but may have probabilistic indicators. However, this is not the only view in the framework (for example, you can understand the economy in terms of just labor and productivity) and the information transfer framework has the benefit of not assuming what a recession is in order to study it (like so many other economic frameworks do: here, here). You can even understand growth in terms of either the Kaldor facts or something that works better.

The "representative firm" is probably an emergent concept in information equilibrium like the representative agent. Additionally, the distributions of firm growth rates discussed in Bloom et al can be understood as the result of looking at ensembles of information equilibrium relationships. Macroeconomic equilibria can be understood as "statistical equilibria" in the economic state space ‒ distributions that are stable over time. The skewed distributions during recessions can be understood in terms of the non-ideal information transfer. The concepts of price stickiness and even stock market prices can also be understood in terms of this economic state space (all at the previous link).

A while ago, I put together an outline of an "Economics 101" course that could be taught using the information transfer framework. You might think the math is tricky, but overall it can be understood in terms of the regular supply and demand diagrams with a couple of changes: there's another "solution" where supply and demand are tightly linked and move together, non-ideal information transfer turns the supply and demand curves into bounds, and to understand macro you might have to look at ensembles of diagrams (a picture from this link is at the top of this post).
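For readers who prefer a concrete calculation, here is a minimal numerical sketch (in Python, with purely illustrative numbers and function names of my own) of that first change: the information equilibrium condition $p = dD/dS = k \; D/S$ has a "general equilibrium" solution where demand and supply move together, $D = D_{0} (S/S_{0})^{k}$.

```python
# Toy check that integrating the information equilibrium condition
# p = dD/dS = k D/S reproduces the closed-form solution D = D0 (S/S0)^k.
# All numbers are illustrative; this is a sketch, not a formal result.

def solve_ie(k, s0=1.0, d0=1.0, s1=2.0, n=100000):
    """Integrate dD/dS = k D/S from (s0, d0) out to s1 with Euler steps."""
    h = (s1 - s0) / n
    s, d = s0, d0
    for _ in range(n):
        d += h * k * d / s   # Euler step of the information equilibrium condition
        s += h
    return d

k = 1.5
d_numeric = solve_ie(k)
d_exact = 1.0 * (2.0 / 1.0) ** k   # closed form D = D0 (S/S0)^k
print(d_numeric, d_exact)          # the two should agree to a few decimal places
```

Doubling supply from $S_{0} = 1$ to $S = 2$ with $k = 1.5$ moves demand to $2^{1.5} \approx 2.83$, and the price $p = k \, D/S$ moves with it rather than against it, which is why this regime doesn't look like a textbook demand curve.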

Since this isn't throwing out all of ordinary economics and has a simple diagrammatic representation, it might stand a chance. And it keeps all the hooks for complexity and agent-based modeling. You can even use the framework to understand the classic microeconomic experiments (here, here) and do real forecasting, making it more empirical in light of the "credibility revolution".

But one important aspect of the information transfer framework is its plurality. It's a framework, not a specific theory. While I have taken to writing my own models in this framework, there is nothing preventing you from writing down a New Keynesian DSGE model, a stock-flow consistent model, "market monetarism", or even understanding completely different schools of economics.

Information transfer economics: a year in review, and thanks

It's orbiting at 19 miles a second, so it's reckoned ... (from Wikimedia Commons)
Well, this year turned out to be rather soul-crushing for the reasons everyone is well-aware of, as well as for some personal reasons that I won't go into. One bright point was the blog, though. It's been a long journey ...


It kicked off nicely in January after Scott Sumner wrote down an information equilibrium model (and again a few months later), and I was invited to give a talk at BPE 2016 in DC (which I unfortunately had to drop out of because of my real job).


I wrote a series of posts that blended from production possibilities to evolutionary fitness (one, two, three, four). A more recent post could be considered a follow-up to those four.

But the most traffic was generated when David Glasner appreciated both my back-and-forth with a commenter on Nick Rowe's blog and my follow-up.


In March, I kicked over a hornets' nest when I criticized Stock-Flow Consistent modeling. I also wrote down the most accurate yet simplest information equilibrium model to date that simply explains inflation and output in terms of labor and capital (the "quantity theory of labor and capital") for several countries.


The blog turned three years old at the end of April, and I celebrated by closing out several forecasts made using the information transfer model that were fairly successful (here, here, here).

I also decided to write a book (which is still in the editing process).


In May, I reviewed Cesar Hidalgo's book Why Information Grows (with a quasi-follow up here).


June had some light blogging for a variety of reasons, but I did write posts relating regulators in quantum field theory to discount factors (and explained why Ole Peters' claim of non-ergodicity is wrong), came up with a new way of framing unemployment data (which it turns out might well have some legs, and which Roger Farmer suggested I publish), and applied information equilibrium to the urban environment.


In July, Todd Zorick (MD, neuroscience researcher, and frequent commenter on the blog) and I finally managed to get our paper published on applying information equilibrium to EEG measurements (which includes a connection between the information transfer index and Lyapunov exponents). Gregor Semieniuk and I started some cross-pollination on statistical equilibrium as a useful principle to understand economic data. I turned this connection into a mini-seminar on the economic state space and the information transfer (IT) index.


In August, I constructed a New Keynesian DSGE model in terms of information equilibrium relationships. There are some slight differences, but the main takeaway is that the NK DSGE model incorporates far less empirically accurate IE relationships than the ones I use in the IE models and forecasts.


I took on the analogy people make between string theory and macroeconomics; I think it is misguided and misunderstands both string theory and the real issues with macro.

I also re-wrote the Kaldor facts of growth economics in terms of information equilibrium, talked about how causal entropy may be useful in economics (quantitatively), and how we can understand Christopher Sims' work on information theory in economics in the context of information equilibrium.


I took on Steve Keen (and nonlinear models in general) in October. More accurately, I seconded Roger Farmer's take and added a bit. This was re-tweeted by Noah Smith and Roger Farmer (among others), making it my most widely read post of the year besides the SFC critique in March.

I also subjected the IE models to the same forecasting tests that DSGE (and other economic models) had failed, and they generally did a much better job (here, here).

A new long time series database became available and I did a first take in a series of posts, looking at principal components (which turn out to be dominated by the US).


November kicked off with an unproductive back and forth with George Blackford about Milton Friedman's "as if" methodology (which I think is a distraction ‒ the real problem is not that Friedman said it was fine if bad assumptions lead to macro models that explain the data; the real problem is the lack of macro models that explain the data).

The most interesting thing I did was try to understand an agent-based model put together by Ian Wright (which he sent me a link to) in terms of information equilibrium; I think this may turn out to be a fruitful avenue of research.

My most popular post of November (it got several re-tweets H/T Ninja Economics) was unfortunately a bit embarrassing for me because it is poorly written. That's happened a couple times before ‒ something I just quickly produced got picked up somewhere and I immediately wish I had spent a little more time writing it.


My blogging usually drops off in December due to the holidays (it's also the end of the budget year at my real job, so most projects come to an end making it the opportune time to take a longer than usual vacation), and this year was no different. However, I did manage to put together a "general theory" of stock market prices in terms of information equilibrium ... as well as this post.

*  *  *

Here's to a better 2017. Thank you to everyone for reading and re-tweeting. A couple of shout outs to commenters Todd Zorick (we finally finished that paper), Tom Brown (and his boundless enthusiasm), and Jamie (for challenging me, allowing me to get better at explaining the information equilibrium framework).

Thanks to Cameron Murray, Roger Farmer, Nick Rowe, David Glasner, Diane Coyle, Paul Romer, Ninja Economics, Unlearning Economics, Pedro Serôdio, Jo Michell, Claudia Sahm, Lionel Yelibi, Noah Smith, Mike Sankowski, Tom Hickey, Igor Carron, and Tom Brown for the re-tweets, likes, and engagement on Twitter and on blogs (probably missed someone). [I did, and have gone back into this list to add a few names.]

Thanks to Cameron (again) and Brennan Peterson for editing/reading/commenting on my forthcoming book.

Tuesday, December 27, 2016

Emergent immoral genes

Nick Rowe linked to an interesting article on critiques of evolutionary biology, and I think it might have a relationship to emergent rational agents. But first, I think Nick is right to suggest replacing "evolutionary biology" with "economics"; this sentence is golden:
... something about [economics] makes it prone to the championing of ideas that are new but false or unimportant, or true and important, but already well studied under a different branding.
This almost exactly echoes Thomas Palley (via Simon Wren-Lewis) on MMT:
The criticism of MMT is not that it has produced nothing new. The criticism is that MMT is a mix of old and new, the old is correct and well understood, while the new is substantially wrong.
Anyway, there were a couple of quotes that caught my attention regarding morality and evolution:
[For some] natural selection causes problems, not only because it is mindless and amoral, but because it can seem downright immoral. For example, Saunders (2003) writes “there is a further danger, as well. Darwinist explanations inherently invoke selfishness and greed as the most important driving forces”. This isn’t true, and even Darwin’s own emphasis on “struggle” probably rests on a mistake (Lewens 2010), but there is a very weak sense in which natural selection involves competition, and there is a lot of research on “conflicts”.
While naturalistic, the theories superficially resemble a transcendental account of value (they provide criteria for judging behaviours as better or worse, without reference to anybody’s attitudes), but the values that they superficially endorse are unattractive (the imaginary motives of the imaginary agents are generally base), and in some accounts, the imaginary agents are not even humans. Anxieties can be real, even if they are baseless, and the aims of these critics are best viewed as therapeutic.
Pointing out the amorality of evolution and evolutionary agents is perfectly in line with the critique (e.g. here or here) that "Homo economicus" is totally unlike Homo sapiens in the sense that we are more cooperative or generous (more "moral") than the rational agents used in economic models. In evolution, agents such as unicellular organisms or even genes acquire (im)moral agendas despite the fact that they are incapable of actually having a moral philosophy.

We might instead view the "agenda" of evolution as emergent from random exploration of the state space, in the same way that selfish H. economicus can possibly emerge from exploration of the economic state space by perfectly moral H. sapiens or even C. capucinus. There are additional parallels in the concepts of evolutionary "fitness" and economic "efficiency" (optimality) that could also emerge from exploration of the state space.
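As a toy illustration of what "exploration of the economic state space" can produce (my own construction, not a result from the links above): the agents below pick a random feasible consumption bundle, with no preferences or optimization anywhere, and yet average consumption of a good falls as its price rises. Demand-curve behavior emerges from mindless exploration.

```python
import random

def mean_demand(px, money=100.0, n=50000, seed=0):
    """Average consumption of good x by agents that each pick a random
    bundle, uniform over the budget set px*x + py*y <= money."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u, v = rng.random(), rng.random()
        if u + v > 1.0:             # fold the unit square into the unit triangle
            u, v = 1.0 - u, 1.0 - v
        total += u * money / px     # x-coordinate of the random feasible bundle
    return total / n

# Mean demand for x falls as its price rises, even though no agent
# "wants" anything -- the budget constraint shapes the state space.
print(mean_demand(px=1.0), mean_demand(px=2.0))
```

With a uniform exploration of the budget set, mean demand works out to $M/(3 p_{x})$, so an observer would infer "optimizing" price-sensitive behavior from agents that have none.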

The key point is that there is no reason to assume that human morality or the amorality of a strand of DNA won't give rise to an emergent immoral, but correct, theory at the macro scale. We might intuitively view optimizing behavior as selfishness, but there is no reason to apply that judgement to the substrate. That economics is understood in terms of emergent selfish agents does not mean humans are selfish ‒ or, more importantly, should be selfish ‒ any more than we should ascribe moral theories to collections of carbon, oxygen, hydrogen, phosphorus, and nitrogen atoms in a strand of DNA.


PS While we may observe some process at the macro scale (like evolution, or macroeconomic fluctuations), we cannot be certain about (or more precisely, come up with a meaningful definition of) the underlying level of agents responsible for that process. Ascribing the unit of evolution to organisms (e.g. Darwin), genes (e.g. Dawkins), groups, or at multiple levels (e.g. Wilson) becomes something of a Sorites paradox. There is no "evolution" except at the macro scale ‒ agents just have behaviors of varying complexity. Likewise, there is no "recession" except at the macro scale ‒ agents just have behaviors of varying complexity.

Skidelsky's lazy econ critique

A lot of people have been linking to Robert Skidelsky's critique of economics at Project Syndicate so I had to finally read it. It's pretty much a "lazy econ critique" per Noah Smith, and while I agree with the general idea that people should have broad educations, some of it just doesn't make sense. I've tackled this somewhat at random, so basically there are a bunch of quotes with my issues following ...

*  *  *
Most economics students are not required to study psychology, philosophy, history, or politics.
I think this is a British thing, so maybe we should just stop listening to British economists? Joking aside, undergraduate economics majors in the US do have history requirements as well as other social science requirements. (Noah Smith points out that intro undergraduate economics actually does touch on a lot of history and doesn't get very mathematical [per below].) Graduate economics students either were undergraduate economics students or came from other disciplines that also require electives. It's true that grad students probably aren't required to take psychology or history (though I strongly suspect at least some economic history is required as part of an econ PhD) but by the time you reach graduate school you should be focusing on your chosen discipline.

As a physicist, it's true to say I wasn't "required" to study specific things aside from physics but I did study linguistics, philosophy, international relations, government, history, history of science, art, molecular biology, chemistry, computer science, and Spanish. My undergraduate education is probably not entirely unrepresentative, and people with a desire to be an economist would likely place more emphasis on the fields Skidelsky points out.
Schumpeter got his PhD in law; Hayek’s were in law and political science ...
I'm not sure there was a widely granted PhD in economics back then (1906 for Schumpeter, 1921 & 1923 for Hayek) — I wasn't able to find a good reference. Schumpeter's advisor was considered an economist, so as far as I'm concerned his PhD is in economics. Irving Fisher received the first PhD in economics granted by Yale and he cobbled his 1892 thesis together from theoretical physics (advisor Willard Gibbs) and sociology (advisor William Graham Sumner) — a forerunner of today's interdisciplinary PhDs? And I'm not sure there really was a major bright line between political science and economics (regarding Hayek).
[Hayek] also studied philosophy, psychology, and brain anatomy.
Aside from the philosophy, the other two would have been full of bad theories and errors (Freud, phrenology). I'm not sure how learning stuff that is wrong should be listed in a positive light.
Economics – how markets work, why they sometimes break down, how to estimate the costs of a project properly — ought to be of interest to most people.
I disagree that the last item (estimating the costs of a project) is the purview of economics. That's business and accounting, plus whatever discipline the project belongs to — are we building a bridge (engineering) or running an ad campaign (advertising)?
Economists claim to make precise what is vague, and are convinced that economics is superior to all other disciplines, because the objectivity of money enables it to measure historical forces exactly, rather than approximately.
No, this is a category error. Examining things that are measurable does not make them precise or exact, just measurable. That allows you to compare your theories to data, and if it is not measurable then it's useless as science. Also, is Skidelsky claiming the objects of economic study are inherently vague? The idea of studying something is precisely that it becomes less vague as you study it; does it stop being part of economics when it is no longer vague? Skidelsky might as well have said economics studies things that can't be understood and then dressed up as a Zen Buddhist. 
Not surprisingly, economists’ favored image of the economy is that of a machine. The renowned American economist Irving Fisher actually built an elaborate hydraulic machine with pumps and levers, allowing him to demonstrate visually how equilibrium prices in the market adjust in response to changes in supply or demand. 
If you believe that economies are like machines, you are likely to view economic problems as essentially mathematical problems. The efficient state of the economy, general equilibrium, is a solution to a system of simultaneous equations.
I thought modern economists studied DSGE models, which are decidedly not models of machines (unless you think of a ribosome as a "machine" [YouTube]). They are stochastic models; while we were young we may have had cars that worked stochastically, but that is not the typical image conjured by the word "machine". In any case, using Fisher's machines as part of a critique of modern economics is disingenuous because no one uses Fisher's models (they aren't DSGE models).

I also disagree with the implication. "Mechanical" thinking does not invariably lead to mathematics (Faraday comes to mind) and mathematics does not only follow from "mechanical" thinking. Mathematics is the formalization of the concept of relationship. It follows from any kind of thinking besides irrational thinking. If you think things are related, there is some mathematics behind it. If you think A causes B or that there is a pattern in a set of data, then you are using math. Just because you might not know what the math is does not mean it does not exist. That is your own failure of imagination — your own ignorance.
One can understand why economists trained in this way were seduced by financial models that implied that banks had virtually eliminated risk.
To quote Noah Smith: Someone doesn't know the difference between econ and financial engineering! This appears to be a reference to a specific piece of the derivation of the Black-Scholes equation where you can "cancel" risk by constructing a particular portfolio. LTCM failed spectacularly trying to implement this, but it didn't bring down the economy. It was really the EMH that led to the idea that the market would discipline the bad behavior of banks if they were allowed to invest (hence deregulating/repealing Glass-Steagall), which led to the spectacular failure of the "shadow banking" sector involved in the Global Financial Crisis. There is no real model behind the EMH except that prices are random, and the EMH (the economy is a random process) is a pretty good null hypothesis going in.
Joseph Schumpeter and Friedrich Hayek, the two most famous Austrian economists of the last century, also attacked the view of the economy as a machine. Schumpeter argued that a capitalist economy develops through unceasing destruction of old relationships. For Hayek, the magic of the market is not that it grinds out a system of general equilibrium, but that it coordinates the disparate plans of countless individuals in a world of dispersed knowledge.
Hayek viewed the market as a system for communicating information – like the internet, which is a machine. Schumpeter's creative destruction and waves of growth are precisely the kind of counterbalancing forces that make up an oscillating circuit (another machine):

In any case, both Hayek and Schumpeter are positing theories that can be represented in terms of mathematics, simulated with algorithms, and (most importantly) compared to data. The fact that Hayek and Schumpeter did not do so does not mean their ideas are non-mathematical or non-mechanical. They were lawyers and political scientists who didn't know how to present their theories mathematically, and therefore failed to imagine that they could be. If I don't know electronics or computer programming, I might well fail to imagine that something I want to build can be built as an electronic system. And I might even say that writing apps for iPhones will never create anything useful.

I've said it before and I'll say it again: people who avoid mathematical descriptions of models or processes are either not trained in mathematics or trying to avoid confronting the data. It's probably because when they do confront the data, the data rejects the theory (e.g. almost no information from prices appears to be used, and Schumpeter's waves, like Keen's, have no observations in economic data that would differ from explanation in terms of stochastic time series).
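To illustrate that last point with a toy example (my own, not a model of Schumpeter's or Keen's waves specifically): a purely stochastic AR(2) process with complex characteristic roots produces wave-like swings, so seeing "waves" in data does not by itself distinguish a cycle theory from a plain stochastic time series.

```python
import math, random

def ar2_series(n=4000, r=0.95, period=40.0, seed=1):
    """Simulate an AR(2) process whose characteristic roots are complex
    (radius r, angle 2*pi/period), which yields noise-driven pseudo-cycles."""
    theta = 2.0 * math.pi / period
    a1, a2 = 2.0 * r * math.cos(theta), -r * r
    rng = random.Random(seed)
    x = [0.0, 0.0]
    for _ in range(n):
        x.append(a1 * x[-1] + a2 * x[-2] + rng.gauss(0.0, 1.0))
    return x[2:]

x = ar2_series()
# Count sign changes: far fewer than white noise (~ n/2), because the
# purely stochastic process swings in slow, wave-like cycles.
changes = sum(1 for a, b in zip(x, x[1:]) if a * b < 0)
print(changes)
```

The point is that an eyeball test for "waves" has essentially no power against the stochastic null hypothesis; you need an observation that would differ between the two.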
[Economists] don’t even read the classics of their own discipline.
As a physicist, I've never read Aristotle or even Newton. Part of participating in a living scientific discipline is that no one owns the ideas, so therefore anyone can have a hand at explaining them. Over time through the variations in explanations new explanations will arise that are superior to the original in any number of ways ‒ being more accurate, more general, more intuitive, or simply clearer.

Understanding quantum electrodynamics as an effective field theory (as it is today) makes much more sense than understanding it as a fundamental theory (as it was from its inception through the 1970s). Understanding Newtonian physics as a consequence of Galilean invariance is a much more powerful concept than understanding it as a series of axioms.

Lex II: Mutationem motus proportionalem esse vi motrici impressae, et fieri secundum lineam rectam qua vis illa imprimitur.
is not more useful than understanding symmetry and Noether's theorem. Heck, what Newton wrote isn't even how it's understood today. A somewhat direct translation is
Second Law: The alteration of motion is ever proportional to the motive force impressed; and is made in the direction of the right line in which that force is impressed.
The modern understanding is:
Second Law: The change of momentum of a body is proportional to the impulse impressed on the body, and happens along the straight line on which that impulse is impressed.
In the modern understanding, momentum and impulse have specific definitions that did not exist at the time of Newton.

The fact that no one reads the "classics" of economics means that they're probably garbage. If they were really insightful, we would have developed models from them that performed well when compared to data. And in that case you wouldn't need the classics, just the models. However, there are no accurate models in economics. That must mean there are no accurate ideas for models to be based upon in those classics. For example, Keynes' ideas incorporated in the ISLM model are not correct except possibly when inflation is low (and/or interest rates are at the zero lower bound), and even then only as an estimate of the direction of effects, not magnitudes. It's a start — as Noah puts it in the previous link it might "point us in the direction of models that might work someday". A lot of what Keynes wrote was vague talk about animal spirits and persistently high unemployment (which does not appear to happen, by the way, ergo the theory is rejected).

A scientific discipline is not sentimental. The classic works are not talismans, and any ideas they contained have been either successfully extracted and reprocessed by thousands of working scientists into useful theories or dropped. Paul Krugman is a bit more charitable than I am, but the end result is the same:
So, first of all, my basic reaction to discussions about What Minsky Really Meant — and, similarly, to discussions about What Keynes Really Meant — is, I Don’t Care. I mean, intellectual history is a fine endeavor. But for working economists the reason to read old books is for insight, not authority; if something Keynes or Minsky said helps crystallize an idea in your mind — and there’s a lot of that in both mens’ writing — that’s really good, but if where you take the idea is very different from what the great man said somewhere else in his book, so what? This is economics, not Talmudic scholarship.
*  *  *

I agree with Skidelsky that people should have broad educations. However this seems like a glittering generality that isn't even a real problem. I've met very few people with advanced degrees who are completely ignorant of "psychology, philosophy, history, or politics". Usually the kinds of people curious enough to take up even a "soft" science like economics have outside interests and general reading habits that expose them to research in other disciplines. Case in point: I'm a physicist that took up economics as a hobby. I've read papers by Daniel Kahneman and other research into neuroscience. I published a paper with Todd Zorick on EEG analysis. Nick Rowe linked to a piece on evolutionary biology the other day (and has suggested things in the past). Nearly every economist seems to make some claims about physics (here, here). Intelligent people usually have diverse interests.

The problem with (macro)economics is that there aren't any theories that match the data. But it's not like there are theories in disciplines outside economics that do match the economic data and economists just don't learn about them because their education isn't broad enough. They don't exist (or at least haven't been peer reviewed); they haven't been left off the grad school curriculum.

In the end, it's just (as David Andolfatto put it) "whining and crying" — what really needs to be done is to improve or come up with new theories that explain the data. Reading the classics and getting a broader education are not obvious steps to accomplish that.


I do think mathematics education should be improved among economists, political scientists, and historians. It would help prevent the kind of ignorance of what mathematics is that Skidelsky demonstrates above, and possibly help economists understand limits and regulate infinities better.

Sunday, December 25, 2016

Stocks and k-states, part III

Adding to this post [1] on the information/statistical equilibrium picture of the stock market, I should note that the ratio $M/B$ is (one version of) "Tobin's Q", making $Q$ proportional to the stock price $p$ (or aggregate industry stock price $\sum_{i \in I} \; p_{i}$):

$$
p \equiv \frac{dM}{dB} = k \; \frac{M}{B} = k \; Q
$$

This wouldn't necessarily predict investment (per Tobin's original argument cited here), but as described in [1] can be used to understand price dynamics. The information equilibrium framework is actually agnostic about the underlying dynamics ‒ assuming only that they're algorithmically complex.

Peak monetary base

Adjusted monetary base from FRED.

And this actually did turn out to be nearly peak monetary base ... (from January of 2015). It was off by about 2 billion dollars (less than 0.05%).


Populist election results in 1892 (from Wikimedia Commons). Coincidentally, the same year as Irving Fisher's thesis.

Simon Wren-Lewis has an interesting post dissecting and defining populism (mostly in light of Brexit and UK austerity). However I think I have a simpler definition: populism is just the zero-sum heuristic applied at the macro scale.

Trade protectionism, immigration restrictions, and even free silver from the populism of yore are all policies that aim their negative effects at some "other" (foreign countries, foreign people, big city banks) and hope to positively impact the group advocating the policy (native country, citizens, rural farmers). What makes them populist is that the positive impacts are "derived" via the zero-sum heuristic from the obvious and direct negative impacts; no constructive argument is made in favor of the positive impacts. Tariffs purportedly benefit e.g. domestic steel by making imported steel less competitive, not by making domestic steel better, increasing domestic steel productivity, or incentivizing entrepreneurship in the steel industry [1].

Since zero-sum bias is easy for humans (especially with regard to desirable resources like jobs) while many positive macroeconomic policy impacts are not zero-sum, the end result of populism is a tendency towards bad policy.


[1] In particular, steelworkers in favor of tariffs on Chinese steel would likely also dislike the idea of another steel start-up undercutting prices and paying lower wages.

Friday, December 16, 2016

Stocks and k-states, part II

I've updated the post on stock market returns and the three-factor model with some examination of the data. Much like the case with prices or profit rates, we can think of there being a macroeconomic statistical equilibrium of IT index states.

Wednesday, December 14, 2016

Stocks and $k$-states

Noah Smith has a fun article at Bloomberg View about how the EMH is becoming Ptolemaic, with epicycles upon epicycles. I thought it would be a good jumping off point for discussing stock market investing based on the information equilibrium model.

Now some disclaimers. First: financial. I own a few shares of Boeing stock and have my 401(k) invested in index funds. Second: invest at your own risk. I am potentially a crackpot physicist who thinks he understands economics and finance. Do you really want to trust your life savings to such a person?

Anyway, that aside, I will start with the basic information equilibrium model between "market capitalization" $M$ and "book widgets" or "book value" $B$, which is

\text{(1) }\; p \equiv \frac{dM}{dB} = k \; \frac{M}{B}

also written $p : M \rightleftarrows B$ where $p$ is the stock price. This is basically the model from this post. And we'd expect a stock price to evolve as a random walk with drift per this post.
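As a quick illustrative sketch (not the post's estimation procedure; the drift and volatility numbers here are made up), a random walk with drift for the price can be simulated with the standard library:

```python
import math
import random

def simulate_price(p0=100.0, drift=0.0005, sigma=0.01, steps=250, seed=42):
    """Simulate a stock price as a geometric random walk with drift."""
    rng = random.Random(seed)
    path = [p0]
    for _ in range(steps):
        # log-price takes a Gaussian step plus a small constant drift
        path.append(path[-1] * math.exp(drift + sigma * rng.gauss(0.0, 1.0)))
    return path

path = simulate_price()
print(path[0], path[-1])
```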

However, since we are looking at "the market", we actually want an ensemble of companies $p_{i} : M_{i} \rightleftarrows B_{i}$ (previously discussed here and here using an ensemble of labor markets). This results in a "statistical equilibrium" distribution of information transfer indices $k_{i}$ that looks something like this:

Each little box represents a particular $k_{i}$ state and since a stock price grows as

\text{(2) }\;p_{i} \sim B_{i}^{k_{i}-1}

a higher $k$ is associated with higher growth. Great. But that distribution represents a statistical equilibrium based on macroeconomic constraints. We'd imagine individual companies changing from one $k$ state to another over time, but this does not necessarily change the distribution. It turns out we can kind of understand the Fama-French three-factor model (as well as other factors Noah discusses) as a description of the properties of the $k$-states making up this distribution.
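The ensemble picture can be sketched in a few lines; the gamma-shaped distribution of $k$-states and all parameter values here are assumptions for illustration, not fits to data:

```python
import math
import random

def simulate_ensemble(n=1000, r=0.03, years=5, seed=0):
    """Draw an ensemble of k-states and grow each firm's cap as M ~ e^{k r t}."""
    rng = random.Random(seed)
    # assumed: k-states drawn from a gamma-like distribution (shape arbitrary)
    ks = [rng.gammavariate(4.0, 2.0) for _ in range(n)]
    caps = [math.exp(k * r * years) for k in ks]
    return ks, caps

ks, caps = simulate_ensemble()
# a firm in a higher k-state shows higher cumulative growth
hi = max(range(len(ks)), key=ks.__getitem__)
lo = min(range(len(ks)), key=ks.__getitem__)
print(caps[hi] > caps[lo])
```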

First, the famous "beta" ($\beta$). According to Noah's article, the idea that $\beta$ leads to excess returns isn't well-supported by the data. This makes sense because high $\beta$ essentially means the company is close to the peak of the distribution pictured above. In terms of $k$, it means a stock with high $\beta$ will have a typical $k$, and so won't outperform the market. This may prevent you from making mistakes and under-performing, but it won't typically yield excess returns.

Small cap excess return is mean reversion.

Second, the "SMB" term for "small minus big", or the excess return of small cap (i.e. small $M_{i}$) stocks over large. Small cap stocks likely haven't been in high $k$ states for long. If they had, they'd be large cap stocks because $M \sim e^{k\;r\;t}$ where $r$ is some underlying fundamental growth rate in the economy. This accounts for a sort of mean reversion: if $k$ has been low and we're in statistical equilibrium, we'd expect a higher $k$ in the future.
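That mean-reversion logic is easy to check in a toy setting: if each period's $k$ is an independent draw from the same stationary distribution, then conditional on today's $k$ being below the mean, the expected next $k$ is just the unconditional mean, i.e. higher. A sketch (distribution and parameters assumed):

```python
import random

rng = random.Random(1)
draws = [rng.gammavariate(4.0, 2.0) for _ in range(100000)]
mean_k = sum(draws) / len(draws)

# draws are independent, so conditional on today's k being below the mean,
# the expected next k is just the unconditional mean -- reversion upward
low_today = [k for k in draws[:-1] if k < mean_k]
next_day = [draws[i + 1] for i in range(len(draws) - 1) if draws[i] < mean_k]

avg_low = sum(low_today) / len(low_today)
avg_next = sum(next_day) / len(next_day)
print(avg_next > avg_low)
```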

Excess return of value over growth is mean reversion.

Third, the "HML" term for high book to market cap ratio minus low. This is the excess return of "value" stocks over "growth" stocks. In terms of the model, a high book to market cap ratio means a low market cap to book ratio, or low $M/B$, which is (proportional to) the stock price $p$ from equation (1). Equation (2) above then implies that a high market to book ratio (a high stock price) means the $k$-state has been large (compared to the average), so from mean reversion (assuming statistical equilibrium, i.e. that the distribution above stays the same) we'd expect a small $k$-state in the future.

Momentum is a tendency for $k$ to change slowly.

Finally, there's momentum. This isn't included in the three-factor model but is supported by the data. This would imply that the $k$-state changes slowly enough that you should be able to catch a high $k$ stock while it is still rising.

So there we have it. SMB, HML, and momentum are explainable (in the information equilibrium model) as statements about statistical equilibrium, mean reversion, and the rate of change of $k$ states. The model also explains why $\beta$ isn't a good measure.

This model also says that there shouldn't be many more useful factors, since given a distribution you really only have the rate of draws from that distribution (rate of change of $k$ states) and mean reversion. There could be more nuanced factors based on the particular form of the distribution (which we don't know), and there definitely could be fundamentals-based factors.

Another interesting takeaway is that this isn't the EMH plus risk. Each price should follow a random walk in the short run (per the link above), which is the essence of the EMH piece. But high $k$ does not necessarily follow from high risk. At best, we can say that high $k$ means that demand is disproportionately high relative to the available book value. This doesn't say much more than simply saying the stock is "desirable". It could be a well-run or consistently profitable company that may pay high dividends (and therefore not risky). It could be "the next big thing" (and therefore risky). Excess returns in the information equilibrium model are basically fundamentals and fads. However, due to macroeconomic constraints (the statistical equilibrium distribution pictured above) you should have some indication whether your excess returns are about to regress to the mean.


Update 15 December 2016

You may ask whether that empirical distribution pictured above qualitatively describes real data. It's difficult to find graphs out there already made that show the information in the correct way. Many show the distribution of daily returns, but those will be swamped by the noise of the day-to-day random walk. Many show the distribution of returns of the S&P 500 index itself, but that's not the returns of individual stocks. What you need is the performance of many individual stocks over a longer period to get the time average. I plan on using Wolfram data servers to do my own version, but I found a blog post out there that did the required calculation for 481 stocks for a full year (2013):

The average gain is 29.6%. If we consider the NGDP growth rate to be the underlying rate $r$ described above, this implies an average $k \sim$ 9.1 (NGDP growth was 3.26% in 2013). So to translate between the graphs in the post above, you'd divide the percentages by about 3 (i.e. $k \sim$ 90/3 = 30 for the 90% bin).
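The translation is just a ratio of annual percentages; a quick check of the arithmetic:

```python
# translate annual return (%) into an estimate of k by dividing by NGDP growth (%)
ngdp_growth = 3.26      # percent, 2013
average_return = 29.6   # percent, average over the 481-stock sample

k_avg = average_return / ngdp_growth
print(round(k_avg, 1))

# a stock in the 90% return bin lands at roughly k ~ 90/3 = 30
k_90 = 90 / ngdp_growth
print(round(k_90))
```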

PS You'd divide by a negative rate (-2.1%) for 2009 -- for example, Google/Alphabet (GOOG) lost about 30% over 2009, which implies $k \sim$ 15.


Update 16 December 2016

I created the above graph using a random sample of 50 NYSE listed stocks for the years 2000 to 2010. Here is a plot of the mean $k$-value (using NGDP growth as the underlying rate $r$) versus time (with the NBER recession indicated in gray):

And here is an animation of the distribution (the blue curve is more meant to guide the eye, showing that the distribution of $k$ states is roughly stable compared to the version below):

You can see the recession as a major deviation (non-ideal information transfer), but otherwise the distribution is fairly stable. Constructing the $k$ value from the cumulative return is essential. The two graphs below show that the distribution moves around a lot more if you neglect the underlying rate of economic growth (and the recession isn't associated with any interesting features of the graph):

Update 25 December 2016

I should note that the ratio $M/B$ is (one version of) "Tobin's Q", making $Q$ proportional to the stock price $p$ (or aggregate industry stock price $\sum_{i \in I} \; p_{i}$):

p \equiv \frac{dM}{dB} = k \; \frac{M}{B} = k \; Q

This wouldn't necessarily predict investment (per Tobin's original argument cited here), but as described above it can be used to understand price dynamics (in a statistical sense).

Update 4 January 2017

Here's a different random sample of the NYSE, but looking over 20 years from 1990 to 2010. First, the $k$-state distribution

And here is the ordinary cumulative return that is far less stable:

Update 14 January 2017

The above correlated movement in the collapse of 2008-9 is motivation for this picture of the financial sector (gray, government in blue):


Update 10 May 2017

I couldn't use an animation in a static presentation, but it made me realize just showing the frames was a good way to see the distribution:

Sunday, December 4, 2016

Stock-flow consistency is tangential to stock-flow consistent model claims

Update 3 February 2017: Welcome visitors from some unknown link. (If someone could leave a comment saying where you stumbled upon this post, I'd be interested. It's rapidly becoming my most popular post of all time, but I have no idea why.) Also, you might find this of interest: desired wealth to income ratio (mentioned below) as an information equilibrium model.
Update 25 August 2017: Welcome visitors from some unknown link. Maybe you might be interested in my book A random physicist takes on economics: out now on!
Steve Roth has a nice article up at Evonomics about the only recently available data from the Fed's quarterly Z.1 report. And as a description of the data available, it's great. And more data is always great.

However, the article is unfortunately framed in terms of "Stock Flow Consistent" (SFC) analysis and "Modern Monetary Theory" (MMT). Here's a good discussion of SFC (actually a response to a response to a previous post about SFC) from Simon Wren-Lewis. I've talked about SFC before, but I thought I'd do a thorough job here using an analogy from engineering.

Let me say this clearly: SFC analysis is tangential to the results claimed by SFC models.

That's the charitable way of putting it. The uncharitable way of putting it is that SFC is obfuscation designed to cover up the fact that the underlying model is entirely ad hoc. Wren-Lewis gives the example of a desired wealth to income ratio. Sure, it's a ratio of a stock and a flow, so it's best to be consistent with them. But the assertion that the ratio is relevant to macroeconomics is of questionable empirical validity.

I like Wren-Lewis's description:
It is true that stock-flow accounting is important in modelling, in the sense that doing it stops you making silly errors.
SFC is like the "natural" label on food in the US. More literally, saying something is SFC is little more than asserting that you got the math right. "An SFC model" is basically just "a model".

That's the TL;DR. Here's the full account ...

*  *  *

Ok, let's begin. Stock-flow consistency basically uses a matrix to make sure that all the "money" is accounted for, whether it is a lump of money ("a stock") or money moving from one element (node) in the model to another ("a flow"). It's essentially a kind of conservation law (there's non-conservation via e.g. revaluation that I'll mention later). In fact, SFC is practically isomorphic to another set of rules that follow from conservation laws: Kirchhoff's circuit laws.
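The bookkeeping itself is easy to mechanize. Here is a minimal sketch (sectors and amounts invented for illustration) of that conservation law: every flow debits one sector and credits another, so the implied changes in the sectors' stocks sum to zero:

```python
# each transaction moves money from one sector to another ("a flow")
transactions = [
    ("households", "firms", 80.0),        # consumption
    ("firms", "households", 75.0),        # wages
    ("government", "households", 20.0),   # transfers
    ("households", "government", 15.0),   # taxes
]

def net_flows(transactions):
    """Net change in each sector's stock implied by the flows."""
    net = {}
    for src, dst, amount in transactions:
        net[src] = net.get(src, 0.0) - amount
        net[dst] = net.get(dst, 0.0) + amount
    return net

net = net_flows(transactions)
# consistency check: flows only move money around, so the changes sum to zero
print(abs(sum(net.values())) < 1e-9)
```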

The currents through a node must sum to zero and the sum of voltage drops around a loop must equal zero. You can think of the currents as flows of electrons and the voltages as stocks of electrons [1]. You are now armed with the equivalent of SFC for electric circuits. So, what does this circuit do?

No idea, right? Kirchhoff's laws tell us that

V_{0} + V_{1} + V_{2} + V_{3} = 0

The current law only tells us there's a single current. If the block on the left were a battery (voltage $V_{0}$) and the three blocks on the right were resistors ($R_{i}$, $V_{i \neq 0} = 0$), then you'd say that a steady-state current sets up such that

V_{0} = i (R_{1} + R_{2} + R_{3})

But that required us to add in Ohm's law ($V = i R$). Kirchhoff's laws only told us that

V_{0} + V_{1} + V_{2} + V_{3} = 0

we used Ohm's law to say

V_{0} - i R_{1} - i R_{2} - i R_{3} = 0
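For the battery-plus-three-resistors reading of the diagram, Kirchhoff plus Ohm pins down the steady-state current (component values assumed for illustration):

```python
# Kirchhoff's voltage law plus Ohm's law for a battery driving three series resistors
V0 = 9.0                            # battery voltage
R1, R2, R3 = 100.0, 220.0, 330.0    # resistances in ohms

i = V0 / (R1 + R2 + R3)             # steady-state current from V0 = i (R1 + R2 + R3)

# check that the voltage drops around the loop sum to zero
loop = V0 - i * R1 - i * R2 - i * R3
print(abs(loop) < 1e-12)
```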

Ok, so SFC, I mean, Kirchhoff plus Ohm's "behavior" law lets us understand how this circuit works. You may have noticed that I just blocked out the elements in that diagram; let's analyze the original circuit:

Ah, an RLC circuit. We can use Kirchhoff's voltage law to give us

V_{R}(t) + V_{L}(t) + V_{C}(t) = V(t)

Now what? Well, we need to know how the various components work. This doesn't come from Kirchhoff's laws, but rather the behavior of the components (e.g. Ohm's law). We find that (after differentiating)

\frac{d^{2}i}{dt^{2}} + \frac{R}{L} \; \frac{di}{dt} + \frac{1}{LC} i = 0

Cool. Now we've narrowed down the behavior of the economy, I mean, circuit right? Not really. All of these functions are solutions to that differential equation:

So we could have an oscillating circuit ($R = 0$), a damped oscillator, or simply decay ($L = 0$). The behavior is characterized by the parameters $R/L$ (decay time) and $1/LC$ (oscillation frequency (squared)). The simple monetary model SIM in Godley and Lavoie's SFC/MMT tome can be thought of as the saturating voltage of an RC circuit (where they tell us what the voltage $V$ is, but "hide" $RC$ by defining it to be 1).
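Those regimes can be checked numerically; here's a minimal sketch using a semi-implicit Euler integrator (component values assumed, chosen so that $R = 0$ gives pure oscillation and $R/L = 2$, $1/LC = 1$ gives critical damping):

```python
def simulate_rlc(R, L, C, i0=1.0, di0=0.0, dt=1e-4, steps=100000):
    """Integrate i'' + (R/L) i' + (1/(L C)) i = 0 with semi-implicit Euler."""
    i, di = i0, di0
    for _ in range(steps):
        d2i = -(R / L) * di - i / (L * C)
        di += d2i * dt
        i += di * dt
    return i

# same 1/LC (oscillation frequency), different R/L (decay time)
undamped = simulate_rlc(R=0.0, L=1.0, C=1.0)   # R = 0: keeps oscillating
damped = simulate_rlc(R=2.0, L=1.0, C=1.0)     # R > 0: decays toward zero
print(abs(undamped), abs(damped))
```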

This has almost nothing to do with Kirchhoff's laws. Actually, it has nothing to do with Kirchhoff's laws. Why? Because you can get this exact same behavior from a pendulum where the equations are built using balancing forces [2].

Note that as it appears, energy is not conserved when $R > 0$; that's because we lose energy to heat in the resistor ($P = i^{2} R$). We could think of the "revaluation" that happens in an SFC model as a time-reverse of this power loss. In any case, that depends on thermodynamics (blackbody radiation) and diffusion, not Kirchhoff. The chemical reactions in a battery produce its voltage drop that is the source of the current. Again, not Kirchhoff.

The takeaway is that Kirchhoff's laws are simply one way to put a bunch of things that are models unto themselves together. You could put those elements together in other ways. And the behavior of the assembled circuit depends on the models of the elements. Kirchhoff's laws (and likewise, SFC) are just one step. And it's just one step in getting to the result, assuming you start with that step. We can get to a damped oscillator in at least two ways:

  1. Kirchhoff's laws
  2. models of components
  3. damped oscillator


  1. free body diagram
  2. (isomorphic) damped oscillator

There's also [2].

Similarly, an SFC model depends on the assumed behavior of the households, governments, firms, banks, whatever. In fact, Jo Michell pointed out something that Simon Wren Lewis noted in that link at the top of the page:
Any behavioural model contains some kind of theory. What I think I said was that [SFC] models often seemed ‘light on theory’, which means that they talk a great deal about the accounting and rather little about theory. ... For example, to say that consumers have a desired wealth to income ratio is light on theory. Why do they have such a ratio? Is it because of a precautionary motive? If it is, that will mean that this desired ratio will be influenced by the behaviour of banks. The liquidity structure of wealth will be important, so they may react differently to housing wealth and financial assets. Now the theory behind the equations in the Bank’s paper may be informed by a rich theoretical tradition, but it is normal to at least reference that tradition when outlining the equations of the model. ... If the point is to emphasise that stocks matter to behavioural decisions about flows, then that is making a theoretical point. As Jo says, DSGE models are stock-flow consistent, but in the basic model consumers have no desired wealth ratio: it is the latter that matters. So when Jo says this absence should ring alarm bells, he is making a theoretical statement.
When Wren Lewis says "light on theory", think ad hoc or assumed behavior of the components. Instead of Ohm's law coming from experiments with materials, the SFC models just assert some sort of behavior for households. The example in the quote is a desired wealth to income ratio. This is like asserting that a component in a circuit tends to act such that

\frac{V}{I} \rightarrow \; \text{constant}

that is to say a component that attempts to achieve a specific impedance. I don't know if such a component is commonly available (it would have to adjust its inductance and/or capacitance depending on the input frequency, but my circuit theory is rusty), but if you wanted to assert it in a circuit, you'd have to design and test it first.
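To make the point concrete, here's a hypothetical "desired ratio" component (entirely invented, like the behavioral assumption it stands in for): its only "physics" is an imposed relaxation toward the target ratio, which is exactly the sense in which the assertion does all the work.

```python
def desired_ratio_component(z_target, z0=10.0, eta=0.5, steps=50):
    """A hypothetical element that adjusts its resistance so V/I -> z_target.

    For a DC resistor, V/I is just its resistance, so the 'behavior' here is
    nothing but the assertion itself -- the imposed rule does all the work.
    """
    z = z0
    history = [z]
    for _ in range(steps):
        z += eta * (z_target - z)   # imposed relaxation toward the desired ratio
        history.append(z)
    return history

h = desired_ratio_component(z_target=75.0)
print(abs(h[-1] - 75.0) < 1e-6)
```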

Stock-flow consistency is taken to absolve SFC models of these unfounded assertions [3] about the behavior of components. But those assertions are doing most of the work.

The information equilibrium model is completely compatible with stock-flow consistency (you can rewrite SFC models as information equilibrium models). How would the SFC advocates feel if I started calling my models SFC models? Sure, this model is completely stock-flow consistent, but it has all kinds of behaviors that depend on the entropy of the various "accounts" (see also here) [4].

And that's one of the issues with MMT and post-Keynesian economics. I think the unifying element is SFC. However, since SFC doesn't narrow things down all that much, and since the real behavior of the models depends on the pieces that are beyond SFC (i.e. models depend on the component behavior, not Kirchhoff's laws), what you have is just a bunch of different ideas that should really be tested separately.

*  *  *

Speaking of emergent concepts like entropy, what if we build a more complex nonlinear circuit? Like this:

And put several of them together in a network like this:

While Kirchhoff's laws are still in effect, once we reach this level of system complexity the stocks and flows of electrons stop being a useful way to understand the circuit. It's far better to look at the circuit as a "learning machine" -- in fact, the details of the underlying circuit stop mattering. Much like our behavior is "emergent" from the simpler behaviors of neurons, macroeconomics is (probably) "emergent" from the behaviors of households and firms. You could potentially use the SFC framework to set up a model of an economy (maybe even an accurate one!). However, much like the oscillating circuits above (or oscillating chemical reactions here [4]), the SFC is tangential [5] to the final result. I imagine you could build an SFC model that reproduces the IS-LM model. But you can end up with the IS-LM model lots of different ways (say, like this).

If you don't like the reservoir computing analogy, just think of the collection of capacitors, transistors, and other elements making up the computer's processor you're using now to read this. Is it an ARM? An i7? Running iOS running Chrome? Running Windows 10 running Firefox? At that level, understanding how those circuits work has little to do with Kirchhoff's laws, barely anything to do with registers and assembly language, and only tangentially related to the network protocols connecting computers on the Internet. In fact, the software was likely programmed using C++ or Java and compiled to work for your OS (which works for your processor). I didn't even have to write up most of the html used to encode this document and serve it to your browser.

Now there is nothing wrong with trying to understand Twitter using Kirchhoff's laws (theoretically you should be able to), it's just that several layers of abstraction exist between you and the electrons.

And that's the issue. As I mentioned above, the results of SFC analysis have little to do with the SFC itself, but instead depend on the assumptions about the behavior of the "circuit elements" (firms, households, government) about which SFC analysis tells us almost nothing.

*  *  *


[1] This is not exactly the best analogy (I'll leave that as an exercise for the reader; hint: what is the voltage law conservation of?). The "perfect" analogy is a bit more abstract, but not more illuminating.

[2] Another way to look at this is "effective field theory" where you have a field $x(t)$ and you simply say that

0 = c_{0} + c_{1} x(t) + c_{2} \frac{dx}{dt} +c_{3} \frac{d^{2}x}{dt^{2}} + c_{4} \frac{d^{3}x}{dt^{3}} + \cdots

and fit the coefficients $c_{i}$. This is closer to the modern way physicists would look at the harmonic oscillator if starting from scratch.

[3] Speaking of unfounded, does anyone know exactly how SFC was involved in Steve's claims about predicting the global financial crisis?
These accounting-based economists more than any others managed to accurately predict our recent Global Great Whatever. And Wynne Godley, rather the pater familias of MMT, predicted the current Euro crisis in amazingly precise and accurate detail — in 1992, before the project was even launched.
All of those predictions (at the links in the original quote) were basically that the housing crisis (which starts in 2005 or 2006) would likely lead to a recession (none say a financial crisis, just growth stagnation, unemployment -- the closest is a "bear market"). This has nothing to do with SFC -- a shock to consumption because people discovered they are poorer than they were does not require detailed analysis of flows. For example, Krugman.

Many people saw that the Euro monetary union without fiscal union would lead to economic crisis based on mainstream theory (e.g. optimum currency area, again Krugman).

[4] This points to another way to obtain oscillation without "Kirchhoff's laws": chemical oscillation.

[5] Sure, I have no problem starting with an SFC matrix as a way to begin the modeling process. However, there are other issues because SFC models tend to have lots of (sometimes hidden) parameters (such as the identification problem).