Wednesday, July 29, 2015

Paper update

I'm still working on the paper and I hope to have a draft uploaded in the next week or two (somewhere ... probably my Google Drive as a public document initially, then submission to the economics e-journal).

As it stands, the outline is:

Information equilibrium as an economic principle

1 Introduction
2 Information equilibrium
   2.1 Supply and demand
   2.2 Alternative motivation of the information equilibrium equation
3 Macroeconomics
   3.1 AD-AS model
   3.2 Labor market and Okun's law
   3.3 IS-LM model and interest rates
   3.4 Solow-Swan growth model
   3.5 Price level and inflation
   3.6 Summary
4 Statistical economics
   4.1 Entropic forces and emergent properties
5 Summary and conclusion

And a couple of screenshots ...

Assuming complexity?

Now that's a complicated phase diagram. From here.
One thing I am importing into my approach to economics from my physics background (that may well be unwarranted) is the idea that physicists apply to fundamental physics: simplicity.

In physics, we generally think the fundamental theory of everything (and unification from EM to EW) is a simplification. This is deeply connected to the idea of a greater and greater number of symmetries of the universe as you look at smaller and smaller scales, from Lorentz symmetry to super-symmetry.

However, the simplicity I am trying to bring to macroeconomic theory is based on something entirely different: the law of large numbers. This is the simplification that happens in statistical mechanics. One interesting thing is that the law of large numbers generally has to happen as N → ∞ unless you believe in significant correlations; super-symmetry does not have to happen as L → ℓp.
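This concentration is easy to see in a toy sketch (my illustration, not part of the paper): the average over N independent micro-scale draws locks onto the true mean as N grows, regardless of the details of any single draw.

```python
import random

def sample_mean(n, seed=0):
    """Average of n i.i.d. uniform(0, 1) draws -- a stand-in for an
    aggregate over n micro-scale agents."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n)) / n

# The spread around the true mean (0.5) shrinks roughly like 1/sqrt(N):
for n in (10, 1000, 100000):
    print(n, abs(sample_mean(n) - 0.5))
```

That shrinking spread is all "the law of large numbers has to happen" means here: no micro detail survives the aggregation.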

Apparently, however, people simply assume macroeconomics is complex.

This philosopher of economics thinks economics is complex. These biologists think economics is complex. The physicists who made the picture at the top of this post think economics is complex. Many economists think economics is complex. So do these people.

Why do so many people think this?

1. Economists can't predict things very well, therefore economics is complex.

To go from this premise to concluding economics is complex requires one of two things: 1) some sort of proof or evidence that current economic theory is the only possible theory or class of theories, or 2) an economic theory that is correct for some reason other than predictions, but says predictions are hard (e.g. the three-body problem in Newtonian physics is complex and unpredictable, but Newtonian physics is pretty good for other reasons).

An example: Aristotelian physics didn't make good predictions about gravity. Did that mean the theory of gravity was complex and unpredictable? No, we just hadn't hit on Newton or Einstein yet.

That is to say the lack of correct predictions could be evidence your theory is wrong, not that the problem is complex.

2. Humans are complex, therefore economics is complex.

A million 1000-variable human agents represent a billion-dimensional problem. If the state of the macroeconomy can be described by a few macroeconomic aggregates like NGDP, a couple of interest rates, inflation, and the money supply (or whatever your choices are) plus a handful of parameters (let's be generous and say it's 1000 variables/parameters), there has to be a massive amount of dimensional reduction [1].

Unless you have demonstrated that this dimensional reduction does not happen, the statement above is just an assumption that it doesn't. I'd say the onus is on you to tell us what the millions of additional relevant macroeconomic aggregates that must exist are.
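A toy version of that dimensional reduction (again just my illustration, with mean/variance/max standing in for whatever aggregates you like): two entirely different high-dimensional micro states map onto the same handful of macro numbers at any realistic measurement precision.

```python
import random

def macro_state(micro):
    """Collapse a high-dimensional micro state into a handful of
    aggregates (mean, variance, max -- stand-ins for NGDP, inflation,
    etc.), rounded to the precision we actually measure them at."""
    n = len(micro)
    mean = sum(micro) / n
    var = sum((x - mean) ** 2 for x in micro) / n
    return (round(mean, 2), round(var, 2), round(max(micro), 1))

rng = random.Random(42)
# Two entirely different 100,000-dimensional micro states ...
a = [rng.random() for _ in range(100000)]
b = [rng.random() for _ in range(100000)]
# ... collapse to the same 3-dimensional macro state:
print(macro_state(a), macro_state(b))
```

The billion-dimensional micro detail is invisible at the macro level; that invisibility is what the "humans are complex" argument has to rule out.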

3. We haven't figured it out yet, therefore economics is complex.

This is the fallacy of the failure of imagination. Just because it is hard to imagine a simple theory of economics doesn't mean there isn't one out there.

For example, "intelligent design" proponents (contra evolution) argue that some biological structures are so complex they couldn't have evolved. That's just a failure to imagine the particular pathway by which they evolved.

4. Economists disagree, therefore economics is complex.

Economists disagree because of the premise of #3: we haven't figured it out yet. The ideas behind classical mechanics weren't settled in the 1600s. That was not evidence that the future Newton's laws would be complex ... based on millions of microscopic homunculus agents.

5. There are a lot of little effects that must be considered, therefore economics is complex.

This is #2 with microeconomic or game theory effects rather than human behavior. You still have a dimensional reduction problem.

6. DSGE (or whatever) models are complex, therefore economics is complex.

So DSGE (or whatever) is right?



Complexity is not a foregone conclusion. That doesn't mean economics isn't complex, just that you shouldn't assume complexity. In the end, you shouldn't assume simplicity, either. The best approach is to start with what you do know for sure (in the posts on this blog, essentially conservation of information) and move out from there.

My point here is not that the complexity of macroeconomics is a false assumption, but that we should be more aware of implicit assumptions ... and the complexity of macroeconomics is one of them.


[1] I think the correct term for this is actually "dimensionality reduction", but I don't really care. I'm not sure there is a real technical term for the fact that e.g. an ideal gas reduces from a 3N-dimensional problem to a 4-dimensional problem.

Monday, July 27, 2015

Biologists' unoriginal and misguided ideas about economics

Physicists and biologists trying to fix economics ...

Mark Thoma linked to this article on biologists wanting to get in on redefining economics with the bold, revolutionary new ideas such as "agent based modeling", "imperfect information" and "adding human behavior". It's pretty funny and I begin to see why economics "as a whole is resistant to outside incursions". I at least took the time to read up on the use of information theory in economics (and basic economics) before jumping in.

In reading the article, it becomes clear that the biologists' ideas to fix economics are both unoriginal and doomed to failure. At least if the information transfer view is correct. Some specific comments are below. I put links to the supporting/elaborating material for the specific claims at the bottom of the page.

But [mathematical formulae come] at the price of ignoring the complexities of human beings and their interactions – the things that actually make economic systems tick.
So you know for a fact that the complexities of human decision-making matter? How? Did you already model an economic system as complex human beings and discover this? Why not just show us that research?

Snark aside, this is a fundamental assumption of economics as well, so this is not only an ad hoc assumption, but an unoriginal ad hoc assumption.

The problems start with Homo economicus, a species of fantasy beings who stand at the centre of orthodox economics. All members of H. economicus think rationally and act in their own self-interest at all times, never learning from or considering others. ... We’ve known for a while now that Homo sapiens is not like that ...
Yes, we have known that for a while, and yet very little has come of it. It's just another unoriginal idea.

In the information transfer view, H. economicus is an effective description, like a quasi-particle in physics. Once you integrate out the degrees of freedom from the micro scale up to the macro scale, the very complicated H. sapiens at the micro scale ends up looking like H. economicus at the macro scale much like the very complicated short range interaction of quarks and gluons ends up looking like a simple charged hard sphere (proton) at long range scales.

How different is a stock price crash from a wildlife population crash?
That is a figure caption and seemingly rhetorical in the article, but they're very different in the information transfer view. A school of fish can coordinate their direction to evade a predator. If an economic system coordinates, it collapses.

Wildlife population crashes are not usually due to coordination of the wildlife itself -- although population booms may lead to crashes. But in this case it is not the coordination itself that leads to a crash. The coordination of wildlife leads to a population boom that e.g. eats all the food resources, leading to starvation. In the information transfer framework, the coordination alone is the source of the fall in economic entropy that leads to a fall in price.

Taking into account [some effects] requires economists to abandon one-size-fits-all mathematical formulae in favour of “agent-based” modelling – computer programs that give virtual economic agents differing characteristics that in turn determine interactions.
This is definitely not original.

There is also a fundamental reason agent-based modeling is unlikely to be helpful. How many input parameters and variables does your agent have? 10? 100? How many agents do you have? 1000? 1,000,000? Your system is now a 100,000,000-dimensional problem.

How many equilibria does your 100,000,000-dimensional problem have? Well, if there aren't any symmetry considerations and your agents are complex enough to capture even a small fraction of the complexity of humans, you will have no idea. But that's supposed to be the point, right? We need to do bottom up simulations of agents because top down analysis doesn't work, or so the biologists (and well before them, micro-foundation obsessed economists) have said. But any particular equilibrium you find is going to critically depend on the initial conditions of your simulation. And that choice could give you any of the equilibria -- many of which probably look enough like a real economy to declare success even though you've just reduced the problem from solving an economy to finding the initial conditions that give you the economy you want.
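The initial-condition dependence is visible even in the most stripped-down toy I can write (a hypothetical sketch, nothing like a real DSGE or agent-based model): agents that adopt the majority choice have two equilibria, and which one the simulation "finds" is set entirely by where you start it.

```python
import random

def run_abm(initial, steps=50):
    """Toy agent-based model: every agent switches to the current
    majority choice. Both all-0 and all-1 are equilibria; which one
    the dynamics converge to depends only on the initial conditions."""
    state = list(initial)
    for _ in range(steps):
        majority = 1 if 2 * sum(state) > len(state) else 0
        state = [majority] * len(state)
    return state

rng = random.Random(1)
tilt_down = [1 if rng.random() < 0.4 else 0 for _ in range(1000)]
tilt_up = [1 if rng.random() < 0.6 else 0 for _ in range(1000)]
print(sum(run_abm(tilt_down)), sum(run_abm(tilt_up)))
```

With thousands of heterogeneous parameters instead of one coin-flip per agent, the equilibrium landscape only gets worse, and "finding the economy you want" gets easier.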

A good example of how wrong-headed this approach is comes from protein folding. One thing the scientists who study protein folding don't do is throw 5000 carbon, nitrogen, oxygen, etc. atoms in a box and turn the crank on the Schrödinger equation. You can get pretty much any structure you want that way (depending critically on the initial conditions).

What they have noticed (empirically) are effective structures (protein secondary structures) that form many of the building blocks of proteins.

That is an example of dimensional reduction; the 45,000-dimensional problem of the 3D position and orientation of 5000 atoms has been reduced to the 90-dimensional problem of the position and orientation of 10 protein secondary structures.

If the information transfer model turns out to be correct, then a macroeconomy can be reduced from that 100,000,000-dimensional problem to a 20-dimensional problem (give or take). The agents -- and the extra 99,999,980 dimensions they contribute -- don't matter.

... economies are like slime moulds, collections of single-celled organisms that move as a single body, constantly reorganising themselves to slide in directions that are neither understood nor necessarily desired by their component parts
This biologist thinks economic systems are an analog of biological systems. Physicists (including myself) tend to think economics reduces to statistical mechanics. Some engineers think in terms of fluid flows. I imagine a geologist would think of economics with a plate tectonics metaphor. Politicians probably think economics is all about the coordinated desires of people. It's remarkable how people in a given field tend to think in terms of their field.

It would be amazing if economics just happened to reduce to an analog of a system in your field, wouldn't it?

In my defense, in the information transfer approach (if valid) it's the difference between thermodynamics (where there is a second law) and economics (where there isn't) that is the new idea. It is this difference -- that economic entropy can decrease spontaneously due to coordinated agent behavior -- that comes into play in showing the slime mold analogy is misguided. Whenever the slime mold moves as a single body you'd get recessions; whenever the individual cells do their own thing you'd get economic growth. Coordination, even emergent coordination, is economic death.

Continued reading ...

Econophysics for fun and profit [about taking on economics as an outsider]
Information theory and economics, a primer [on 'effective' H. economicus]
Coordination costs money, causes recessions
What if money was made of vinegar? [Dimensional reduction]
Against human centric macroeconomics [is human behavior relevant?]
Is the demand curve shaped by human behavior? How can we tell?

Kaldor, endogenous money and information transfer

Nick Edmonds read Kaldor's "The New Monetarism" (1970) a month ago and put up a very nice succinct post on endogenous money.
The first point is that, for Kaldor, the question over the exogeneity or endogeneity of money is all about the causal relationship between money and nominal GDP.  The new monetarists ... argued that there was a strong causal direction from changes in the money supply to changes in nominal GDP ... 
Endogenous money in this context is a rejection of that causal direction.  Money being endogenous means that it is changes in nominal GDP that cause changes in money or, alternatively, that changes in both are caused by some other factor.
I've talked about causality in the information transfer framework before, and I won't rehash that discussion except to say causality goes in both directions.

The other interesting item was the way Nick described Kaldor's view of endogenous money
As long as policy works to accommodate the demand for money, we might expect to see a perpetuation in the use of a particular medium - bank deposits, say - as the primary way of conducting exchange.  ...  But any stress on that relationship [between deposits and money] will simply mean that bank deposits will no longer function as money in the same way. The practice of settling accounts will adapt, so that we may need to revise our view of what money is.
One interpretation of this (I'm not claiming this as original) is that we might have a hierarchy of things that operate as "money":

  • physical currency
  • central bank reserves
  • bank deposits
  • commercial paper
  • ... etc

In times of economic boom, these things are endogenously created (pulled into existence by the force of economic necessity). The lower on the list, the more endogenous they are. When we are hit by an economic shock, stress on the system causes these relationships to break, one by one. And one by one they stop being (endogenous) money. In the financial crisis of 2008, commercial paper stopped being endogenous money.

Additionally, a central bank attempting to conduct monetary policy by targeting e.g. M2 can stress the relationship between money and deposits, causing it to behave differently (which Nick reminds us is similar to the Lucas critique argument).

This brings us to an interpretation of the NGDP-M0 path as representing a "typical" amount of endogenous money that is best measured relative to M0. Call it α M0 (implicitly defined by the gray path in the graph below). At times, the economy rises above this value (NGDP creating 'money' e.g. as deposits via loans, as well as other things being taken as money like commercial paper). When endogenous money is above the "typical" value α M0, there is a greater chance it will fall (the hierarchy of things that operate as money start to fall apart when their relationship is stressed).

Another way to put this is that the NGDP-M0 path represents the steady state (or the vacuum solution, in particle physics terms), and endogenous money provides the theory of fluctuations around that path. The theory of those endogenous fluctuations isn't necessarily causal from M2 to NGDP; however, the NGDP-M0 relationship is causal in both directions (in the information transfer picture).

At a fundamental level, the theory of endogenous fluctuations is a theory of non-ideal information transfer -- a theory of deviations from the NGDP-M0 path in both directions (see the bottom of this post).

Sunday, July 26, 2015

Resolving the paradox of fiat money

As the dimension of this simplex defined by the budget constraint Σ Ci = M increases, most points are near the budget constraint hyperplane ... and therefore the most likely point will be near the hyperplane.

In his recent post on neo-Fisherism, David Glasner links to an earlier post about the paradox of fiat money -- that money only has value because it is expected to have value:
But the problem for monetary theory is that without a real-value equivalent to assign to money, the value of money in our macroeconomic models became theoretically indeterminate. If the value of money is theoretically indeterminate, so, too, is the rate of inflation. The value of money and the rate of inflation are simply, as Fischer Black understood, whatever people in the aggregate expect them to be.
The problem then becomes the problem of the "greater sucker"; rational people would only accept money because they expect they will be able to find a greater sucker to accept it before its value vanishes. But since at some point e.g. the world will end and there won't be a greater sucker, the expected value should be zero today. Note that the idea of the future rushing into the present is a general problem of expectations, as I wrote about here.

After getting a question from Tom Brown about this, I started answering in comments. Now I think the information transfer framework gives us a way to invert that value argument -- that if you don't accept money, you are the greater sucker. The argument implies a stable system of fiat currencies.

The argument starts here; I'll quickly summarize the link. If we imagine a budget constraint that represents the total amount of money in the economy at a given time being used in transactions for various goods, services, investments, etc C₁, C₂, C₃, ... Cn, then using a maximum entropy argument with n >> 1 we find the most likely state of the economy saturates the budget constraint and minimizes non-ideal information transfer (see the picture at the top of this post for n = 3). And since:

k N/M ≥ dN/dM ≡ P

we can say minimized non-ideal information transfer maximizes prices for a given level of goods and services {Cn} because P is as close to k N/M as it can be. We would think of arrangements of trust (credit) or barter exchange as less ideal (very non-ideal) information transfer than using money or some money-like commodity.
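The geometric fact behind "most points are near the budget constraint hyperplane" can be made concrete (a sketch of the volume argument, not the full maximum entropy calculation): rescaling each coordinate by (1 - ε) shows that the fraction of the solid simplex within ε·M of the hyperplane is exactly 1 - (1 - ε)^n, which rushes to 1 as n grows.

```python
def fraction_near_budget(n, eps=0.05):
    """Fraction of the volume of the solid simplex {C_i >= 0,
    sum C_i <= M} lying within eps*M of the budget hyperplane
    sum C_i = M. Scaling each coordinate by (1 - eps) gives the
    exact answer 1 - (1 - eps)**n."""
    return 1 - (1 - eps) ** n

# Nearly all of the volume hugs the budget constraint once n >> 1:
for n in (3, 30, 300):
    print(n, round(fraction_near_budget(n), 3))
```

So for even a modest number of goods and services, a randomly chosen economic state almost certainly saturates the budget constraint.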

This maximized monetary value critically depends on n >> 1 -- that as many goods and services are exchangeable for whatever is being used as money as possible. This means that whoever trades their goods and services for the most widely used money gets a higher price (more ideal information transfer) for those goods and services. If I don't accept money, then I'm getting a worse deal and I'm the greater sucker. That would stabilize an existing fiat currency system because if I refuse to take money, I'd contribute to the downfall of my own personal wealth. I'd also get a worse deal in that particular transaction.

I've explained this argument in terms of rational agents. However in the information transfer framework we'd think of this argument as money allowing agents to access larger portions of state space and hence achieve higher entropy. We would think of money as a dissipative structure, like convection cells in heated water or even life itself, arising in order to maximize entropy production to move the system towards equilibrium. Convection cells only cease to exist when the water reaches a uniform temperature. Analogously, money only loses its value when every scarce resource is equally allocated among everyone (the Star Trek economy) -- the economic equivalent of heat death.

Update +3 hours:

Although the money value argument admits a rational agent explanation, the truth is that there may not be any such explanation that is valid in terms of microeconomics -- that money is an emergent structure and its effects are entropic forces. The rational explanation may be like incentives or Calvo pricing: an attempt to 'microfound' a force (effect, or structure) that only exists at the macro level. Osmosis and diffusion have no microscopic mechanism (although you could invent one, an effective force [1]) and maybe the value of money has no micro explanation that is actually true.


[1] An example of a (possible) entropic force that we tend to explain with an invented micro force is gravity. We think of it as mediated by gravitons that behave similarly to photons, but it might be closer to the stickiness of glue. It is important to note that being an entropic macro force doesn't mean it is impossible to model as a micro force.

Wednesday, July 22, 2015

Compressed sensing, information theory and economics

A comment from Bill made me recall how I'd looked at markets as an algorithm to solve a compressed sensing problem before starting this blog. This is mostly a placeholder post for some musings.

The idea behind "compressed sensing" is that if what you are trying to sense is sparse, then you don't need as many measurements to "sense" it if you measure it in the right basis [1]. A typical example is a sparse image that looks like a star field: a few points (k = 3) in mostly blank space (first image above). If you were incredibly lucky, you could measure exactly the three points (m = k = 3) and reproduce the image. However, information theory tells us that we need (see e.g. here [pdf]):

m > k log(n/k)

measurements. As what you are trying to measure gets more complex, you start to need all of the points (m ~ n) which is behind the Nyquist sampling theorem. You can think of the economic allocation problem as fairly sparse -- most of the time any one person is not buying bacon (note the diagram on the upper right if you are viewing this on a desktop browser). And the compressed measurement (the m's) happens when you read off the market price of bacon [2].
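Plugging in numbers for the star-field example shows how dramatic the savings are (a worked instance of the bound above, using illustrative sizes of my own choosing):

```python
import math

def min_measurements(n, k):
    """Order-of-magnitude information-theoretic bound on the number
    of measurements needed to recover a k-sparse signal of length n:
    m ~ k * log(n / k)."""
    return math.ceil(k * math.log(n / k))

# A 3-point star field in a 10,000-pixel image needs far fewer
# measurements than naive (Nyquist-style) sampling of all n pixels:
n, k = 10000, 3
print(min_measurements(n, k), "measurements vs", n)
```

A few dozen measurements versus ten thousand; sparsity is doing almost all of the work.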

There are different algorithms that take advantage of the information provided by knowing your image is sparse. One of the least efficient algorithms is linear programming. Sound familiar? That's also a very inefficient way to solve the market allocation problem.

The algorithms that solve the sparse problem also have a tendency to fail if m is too low or if you add noise to the image (second image above). Additionally, the transition from failure to success can be fairly sharp -- referred to as a Donoho-Tanner phase transition by Igor Carron. Does this tell us something about market behavior? I don't know. As I said, these are just some musings.


[1] For natural images, this basis tends to be the wavelet basis. For something like our star field above, the Fourier basis works.

[2] Does a market create a basis for sparsifying an economic allocation problem?

I guess I'm back

Given that I've been posting every couple of days, empirically I'd have to say I'm back.