Monday, December 11, 2017

JOLTS data day!

The latest data from the Job Openings and Labor Turnover Survey is out today on FRED and we're here with another update of the forecast performance/recession indicator. Here are the hires and openings data:


Here's the update of the hires shock counterfactual evolution (a fall in hires might be a leading indicator):


Here's the updated Beveridge curve as well:


Sunday, December 10, 2017

Another unemployment rate forecast comparison

Paul Romer tweeted a graph of an unemployment rate projection. I'm not sure where it came from — my guess is the World Bank — but I thought I'd add it to the forecast from the dynamic information equilibrium model last updated here. Already it (thick dark blue line) seems to be fairly wrong (no confidence limits were given; the dynamic equilibrium model uses 90%):


Of course the dynamic information equilibrium forecast is conditional on the lack of shocks (which can be identified via the algorithm discussed here). The forecast Romer tweeted could be the result of a very broad but small amplitude shock to the dynamic equilibrium model, but such a shock would be unlike any other adverse shock in the US data since the Great Depression.

Saturday, December 9, 2017

Latest unemployment numbers and structural unemployment

The latest monthly unemployment numbers for the US came out on Friday (unchanged at 4.1% from last month), so I've yet again put the new data points on my old model forecast graphs to see how they're performing (just great, by the way — more details are below). There were also several mentions of the old "structural unemployment" argument made against fiscal and monetary stimulus in the wake of the financial crisis, noting that it hasn't held up well as unemployment has fallen to its lowest level in years. In particular, Paul Krugman noted:
Remember when all the Very Serious People knew that high unemployment was structural, due to a massive skills gap, and could never be expected to return to pre-crisis levels?
He linked back to an old blog post of his where he showed an analysis from Goldman Sachs about state unemployment rates and then looked at unemployment rates and the subsequent recovery by occupation. The data showed that occupations (and states) that had been hit harder (unemployment increased more) had recovered faster (unemployment had declined more). Krugman said this indicated unemployment was cyclical, not structural:
So the states that took the biggest hit have recovered faster than the rest of the country, which is what you’d expect if it was all cycle, not structural change. ... the occupations that took the biggest hit have had the strongest recoveries. In short, the data strongly point toward a cyclical, not a structural story ...
What was interesting to me was that the data Krugman showed was actually just a result of the dynamic information equilibrium model — the larger the shock, the faster the subsequent fall, because the dynamic information equilibrium is a constant value of (d/dt) log u(t), i.e. a constant slope in log u(t). In fact, the data Krugman showed match up pretty well with the result you'd expect from the dynamic equilibrium model:


This tells us that the dynamic equilibrium is the same across different occupations (much like how the dynamic equilibrium is the same for different races, or for different measures of the unemployment rate). All of this tells us that unemployment recoveries [1] are closer to a force of nature (or "deep structural parameters" in discussions of the Lucas critique). But on another level, this is also just additional confirmation of the usefulness of the dynamic equilibrium model for unemployment.
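To make the "constant slope" point concrete, here is a minimal sketch (not code from the blog's repositories; the equilibrium rate and peak unemployment values are hypothetical) showing that if every occupation shares the same slope in log u(t), the occupations hit hardest mechanically show the largest subsequent declines in percentage points:

```python
# A minimal sketch, assuming a hypothetical dynamic equilibrium rate of -9%/year
# and hypothetical peak unemployment rates for two occupations. The point: with the
# same slope in log u(t), the harder-hit occupation falls further in level terms.
import numpy as np

alpha = -0.09                      # assumed d/dt log u (not a fitted value)
t = np.linspace(0, 5, 6)           # years after the shock peak

for u_peak in (6.0, 12.0):         # hypothetical peak unemployment rates (percent)
    u = u_peak * np.exp(alpha * t) # identical log-slope for both occupations
    print(f"peak {u_peak:4.1f}% -> decline after 5 years: {u_peak - u[-1]:.1f} percentage points")
```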

*  *  *

As I mentioned above, I also wanted to show how the forecasts were doing. The first graph is the model forecast alone. The second graph shows comparisons with the (frequently revised) forecasts from the FRB SF. The third graph shows a comparison with the (also revised) forecast from the FOMC.





...

Footnotes:

[1] The shocks to unemployment are non-equilibrium processes in this model. It remains an open question whether these shocks can be affected by policy, or whether they too are a force of nature.

Tuesday, December 5, 2017

Does increased compensation cause increased productivity?

Noah Smith has an article at Bloomberg View asking why compensation hasn't risen in lockstep with productivity. Recent research seems to say that it at least rises a little when productivity rises in the short run, but not one-for-one:
This story gets some empirical support from a new study by economists Anna Stansbury and Larry Summers, presented at a recent conference at the Peterson Institute for International Economics. Instead of simply looking at the long-term trend, Stansbury and Summers focus on more short-term changes. They find that there’s a correlation between productivity and wages — when productivity rises, wages also tend to rise. Jared Bernstein, senior fellow at the Center on Budget and Policy Priorities, checked the results, and found basically the same thing.
I thought the long run data would be a good candidate for the dynamic information equilibrium model, but it turned up some surprising results. It's true that the two series appear correlated. Real output per hour (OPH) seems to rise faster, at about 1.46%/y, while real compensation per hour (CPH) rises at about 0.45%/y. This has held up throughout the data that isn't subject to a non-equilibrium shock (roughly the "Great Moderation" and the post-global financial crisis period).

But the interesting part of this particular framing of the data is the timing of the shocks — shocks to real compensation per hour precede shocks to real output per hour:


The shocks to CPH (t = 1952.1 and t = 1999.6) precede the shocks to OPH (t = 1959.8 and t = 2001.0). Real compensation increases before real output increases. It's not that compensation captures some part of rising output; it's that giving people raises increases productivity.
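As a rough illustration of how those shock centers are estimated, here is a sketch (synthetic data and made-up parameters, not the actual series or fitting code from the repositories) that fits a linear log trend plus a single logistic transition and recovers the transition center:

```python
# A sketch of shock-center estimation: fit a dynamic-equilibrium form (linear trend
# in log space plus one logistic shock) to a synthetic series. All numbers here are
# placeholders for illustration, not fitted values from the CPH/OPH data.
import numpy as np
from scipy.optimize import curve_fit

def diem(t, a, b, amp, t0, width):
    """Linear log trend plus one logistic transition centered at t0."""
    return a + b * (t - 1948) + amp / (1.0 + np.exp(-(t - t0) / width))

rng = np.random.default_rng(0)
t = np.arange(1948, 2018, 0.25)
y = diem(t, 0.0, 0.0045, 0.25, 1999.6, 3.0) + rng.normal(0, 0.01, t.size)

popt, _ = curve_fit(diem, t, y, p0=(0.0, 0.005, 0.2, 1995.0, 5.0))
print(f"estimated shock center: {popt[3]:.1f}")
```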

Now it is entirely possible this framing of the data isn't correct (there is a less statistically significant version of the dynamic equilibrium that treats the periods identified here as shocks as the equilibrium, and the 80s and 90s, as well as the post-crisis period, as the shocks). However, there is some additional circumstantial evidence that the productivity shocks correspond to real-world events. The late 90s shock seems associated with the introduction of the internet to a wider audience than defense and education, while the 40s and 50s shock is likely associated with a post-war increase in production efficiency in the US. It is possible increased compensation is due to the increased skills required to use new technologies and methods — with those raises and increased starting salaries needing to happen before firms can implement these technology upgrades [it costs more to get labor with the latest skills]. Those could well be just-so stories (economists like stories, right?), but I believe the most interesting aspect is simply the plausible existence of this entirely different (but mathematically consistent) way to look at the data, along with its entirely different policy implications (i.e. needing to find ways to directly raise wages instead of looking for ways to increase growth or productivity).

...

Update: I added a bit of clarifying text — e.g. "upgrades" and the bracketed parenthetical — to the last paragraph, which was ambiguous. Also, the reference to the "post-crisis period" replaces "2010s" because the latter could be confused with the actual shock to output in 2008/9, whereas I am actually referring to the period after the shock that we are still in.

Monday, December 4, 2017

Supply and demand and science

Sometimes recognizing a symmetry is the biggest step.

Sometimes when you give a seminar or teach a class, a student or attendee brings up a point that perfectly sets you up to explain your topic. A few tweets from @UnlearningEcon and Steve Roth gave me this opportunity today:
UE: Confused by the amount of times it is claimed demand and supply has loads of empirical evidence behind it when I've barely seen any 
UE: Naming a couple of obvious insights or point estimates is not sufficient to prove the full model! Can somebody give me an actual falsifiable test, please? 
UE: Conclusion: demand-supply is largely an article of faith which people 'prove' with a couple of casual observations. Science! 
SR: Thing is you can't measure demand (desire) and supply (willingness) to buy/sell — necessarily, across a range of prices at a point in time. Only observe the P/Q where they meet. Why S/D diagrams are always dimensionless.
There's an almost perfect analogy here with the concept of "force" in physics. Force F, often described using the equation F = m a [1] or better F = dp/dt, is actually a definition. That is to say it's F ≡ dp/dt. At the time of Newton [2], it was an article of faith. It was an article of faith that organized a bunch of disparate empirical "point estimates" and insights from Kepler and Galileo.

That is all to say Newton represents more of a statement of "looking at it this way, it's much simpler" than an application of the oversimplified "scientific method" we were all taught in school, involving a hypothesis, collecting data, and using that data to confirm or reject the hypothesis. Unfortunately, contributions to science like Newton's aren't easily reproduced in classrooms, so most people end up thinking hypothesis testing with data is all there is. Note that Einstein's famous contributions were like Newton's in the sense that they organized a bunch of disparate empirical point estimates (in this case, deviations from a Newtonian world).

Though Newton and Einstein get all the plaudits, both of their big contributions are really specific instances of what is probably one of the greatest contributions to physics of all time: Noether's theorem. Emmy Noether was asked by David Hilbert about energy conservation in General Relativity, but she ended up proving the more general result that conservation laws are consequences of symmetry principles. Newton's symmetry was Galilean invariance; Einstein's were Lorentz covariance (special relativity) and general covariance (general relativity). Newton's laws are really just a consequence of conservation of momentum.

That gives us a really good way to think about Newton-like contributions to science: they involve recognizing (or steps towards recognizing) general symmetry principles.

What does this have to do with supply and demand?

This is where Steve Roth's tweet and the work on this blog come in. Supply and demand relationships seem to be a consequence of scale invariance (the dimensionlessness Steve points out) of information equilibrium [3]. In fact, realizing supply and demand is a consequence of the scale invariance encapsulated by the information equilibrium condition gives us a handle on the scope conditions — where we should expect supply and demand to fail. And since those scope conditions (which involve e.g. changing supply much faster than demand can react) can easily fail in real-world scenarios, we shouldn't expect supply and demand as a general theory to always be observed and empirically validated. Good examples are labor and housing markets where it is really hard to make supply change faster than demand (in the former case because adding workers adds wage earners adding demand, and in the latter case because it is impossible to add housing units fast enough). What we should expect is that when the right conditions are in effect, supply and demand will be a useful way to make predictions. One of my favorite examples uses Magic: The Gathering cards.
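For readers who want the equation being referenced: the scale invariance in question is that of the information equilibrium condition for demand D and supply S (written schematically here; the scope conditions and the full derivation of supply and demand curves are in the linked posts),

P ≡ dD/dS = k D/S

Both sides are unchanged under the rescaling D → λD, S → λS, so the condition only ever constrains dimensionless ratios — one way of restating the dimensionlessness Steve points out.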

Yes, I realize I have obliquely implied that I might be the Isaac Newton of economics [4]. But that isn't the main point I am trying to make here. I'm trying to show how attempts at "falsification" aren't the only way to proceed in science. Sometimes useful definitions help clarify disparate insights and point estimates without being directly falsifiable. The information equilibrium approach is one attempt at understanding the scope conditions of supply and demand. There might be other (better) ones. Without scope conditions however, supply and demand would be either falsified (since counterexamples exist) or unfalsifiable (defined in such a way as to be unobservable [5]).

...

Footnotes

[1] You may think that mass and acceleration are pretty good direct observables making force an empirical observation. While acceleration is measurable, mass is problematic given that what we really "measure" is force (weight) in a gravitational field (also posited by Newton). Sure, this cancels on a balance scale (m₁ g = m₂ g → m₁ = m₂), but trying to untangle the epistemological mess is best left to arguments over beer.

[2] Actually Newton's Lex II was a bit vague:
Lex II: Mutationem motus proportionalem esse vi motrici impressae, et fieri secundum lineam rectam qua vis illa imprimitur.
A somewhat direct translation is:
Second Law: The alteration of motion is ever proportional to the motive force impressed; and is made in the direction of the right line in which that force is impressed.
The modern understanding is:
Second Law: The change of momentum of a body is proportional to the impulse impressed on the body, and happens along the straight line on which that impulse is impressed.
Where momentum and impulse now have very specific definitions as opposed to "motive force" and "motion". This is best interpreted mathematically as

I ≡ Δp

where I is impulse and p is the momentum vector. The instantaneous force is (via the fundamental theorem of calculus, therefore no assumptions of relationships in the world)

I = ∫ dt F

F ≡ dp/dt

The alteration of "motion" (i.e. momentum) is Δp (or infinitesimal dp), and the rest of the definition says that the F vector (and impulse vector I) is parallel to the p vector. Newton would have written in his own notes something like f = ẋ using his fluxions (i.e. f = dx/dt).

[3] I've talked about this on multiple occasions (here, here, here, or here).

[4] At the rate at which new ideas become incorporated into economic theory, I will probably have been dead for decades and someone else (with the proper credentials) will have independently come up with an equivalent framework.

[5] People often make the point that demand isn't directly observable (or as Steve Roth says, neither supply nor demand is observable). My tweet-length retort to this is that the wavefunction in quantum mechanics isn't directly observable either. In fact, given the scale invariance of the information equilibrium condition, we actually have the freedom to re-define demand as twice or half any given value. This is analogous to what is called gauge freedom in physics (a result of gauge symmetry). The electric and magnetic potentials are only defined up to a "gauge transformation" and are therefore not directly observable.

To me, this is a very satisfying way to think about demand. It is not a direct observable, but we can compute things with a particular value knowing that we can scale it to any possible value we want (at least if we are careful not to break the scale invariance in the same way you try not to break the gauge invariance in gauge theories). Nominal GDP might be an incomplete measure of aggregate demand, but so long as aggregate demand is roughly proportional to NGDP we can proceed. What is important is whether the outputs of the theory are empirically accurate, such as this example for the labor market.

Information transfer economics: year in review 2017

I finally published my book in August of this year. Originally I was just going to have an e-book, but after requests for a physical paperback version I worked out the formatting. I'm glad I did — it looks nice!

With 2017 coming to a close, I wanted to put together a list of highlights like I did last year. This year was the year of dynamic information equilibrium as well as presentations. It was also the year I took some bigger steps in bringing my criticisms of economics and alternative approaches to the mainstream, having an article at Evonomics and publishing a book.

I'd like to thank everyone who reads, follows and shares on Feedly and Twitter, or who bought my book. It is really only through readers, word of mouth, and maybe your own blog posts on information equilibrium (like at Run Money Run) that there is any chance the ideas presented here might be heard or investigated by mainstream economists.

I'd also like to thank Cameron Murray for a great review of my book, Brennan Peterson for helping me edit my book, as well as Steve Roth at Evonomics (and Asymtosis) for being an advocate and editor of my article there.

Dynamic information equilibrium


The biggest thing to happen was the development of the dynamic information equilibrium approach to information equilibrium. The seeds were planted in the summer of 2014 in a discussion of search and matching theory where I noted that the rate of unemployment recovery was roughly constant — I called it a "remarkable recovery regularity". Another piece was looking at how the information equilibrium condition simplifies given an exponential ansatz. But the Aha! moment came when I saw this article at VoxEU.org that plotted the log of JOLTS data. I worked out the short "derivation", and applied it to the unemployment rate the next day.
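Roughly, that derivation goes as follows (a schematic sketch; the full version is in the linked posts). For two quantities in information equilibrium A ⇄ B, the information equilibrium condition dA/dB = k A/B implies A ~ B^k. If B grows approximately exponentially, B ~ exp(b t), then

d/dt log (A/B) ≈ (k − 1) b

which is approximately constant — the "remarkable recovery regularity" — except during the non-equilibrium shocks, which appear as logistic steps added to log(A/B).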

Since that time, I have been tracking forecasts of the unemployment rate (and other measures) using the dynamic equilibrium model. I even put together what I called a "dynamic equilibrium history" of the US contra Friedman's monetary history. As opposed to other economic factors and theories, the post-war economic history of the US is almost completely described by the social transformation of women entering the workforce. Everything from high inflation in the 70s to the fading Phillips curve can be seen as a consequence of this demographic change.

Dotting the i's and crossing the t's


Instead of haphazardly putting links to my Google Drive, I finally created Github repositories for the Mathematica (in February) and eventually Python (in July) code. But the most important thing I did theoretically was rigorously derive the information equilibrium conditions for ensembles of markets, which turned out to obey equations formally similar to those for individual markets. This was a remarkable result (in my opinion) because it means that information equilibrium could apply to markets for multiple goods — and therefore macroeconomic systems. In a sense it makes rigorous the idea that the AD-AS model is formally similar to a supply and demand diagram (and under what scope it applies). The only difference is that we should also see slowly varying information transfer indices, which would manifest as e.g. slowing growth as economic systems become large.

Connections to machine learning?


These are nascent intuitions, but there are strong formal similarities between information equilibrium and Generative Adversarial Networks (GANs) as well as a theoretical connection to what is called the "information bottleneck" in neural networks. I started looking into it this year, and I hope to explore these ideas further in the coming year!

Getting the word out


Over the past year or so, I think I finally reached a point where I sufficiently understood the ideas worked through on this blog that I could begin outreach in earnest. In May I published an article at Evonomics on Hayek and the price mechanism that works through the information equilibrium approach and its connection to Generative Adversarial Networks (GANs). In August, I (finally) published my book on how I ended up researching economics, on my philosophy of approaching economic theory, as well as on some of the insights I've learned over four years of work.

I also put together four presentations throughout the year (on dynamic equilibrium, a global overview, macro and ensembles, and forecasting). Several of my presentations and papers are collected at the link here. In November, I started doing "Twitter Talks" (threaded tweets with one slide and a bit of exposition per tweet) which were aided by the increase from 140 to 280 characters — in the middle of the first talk! They were on forecasting, macro and ensembles, as well as a version of my Evonomics article.

*  *  *

Thanks for reading everyone! This blog is a labor of love, written in my free time away from my regular job in signal processing research and development.

Saturday, December 2, 2017

Comparing the S&P 500 forecast to data (update)

I haven't updated this one in a while — last time in September — mostly because there seem to be some issues with Mathematica's FinancialData[] function such that it's no longer pulling in the archived data that computes the projection. So I did a kind of kludgy workaround where I just overlaid an updated graph of the latest data on an old graphic:


Thursday, November 30, 2017

Comparing my inflation forecasts to data

Actually, when you look at the monetary information equilibrium (IE) model I've been tracking since 2014 (almost four years now with only one quarter of data left) on its own it's not half-bad:


The performance is almost the same as the NY Fed's DSGE model (red):


A more detailed look at the residuals lets us see that both models have a bias (IE in blue, NY Fed in red):


The thing is that the monetary model looks even better if you consider the fact that it only has 2 parameters while the NY Fed DSGE model has 41 (!). But the real story here is in the gray dashed and green dotted curves in the graph above. They represent an "ideal" model (essentially a smoothed version of the data) and a constant inflation model — the statistics of their residuals match extremely well. That is to say that constant inflation captures about as much information as is available in the data. This is exactly the story of the dynamic information equilibrium model (last updated here) which says that PCE inflation should be constant [1]:


Longtime readers may remember that a year ago, after being asked to add a constant inflation model to my reconstructions of the comparisons in Edge and Gurkaynak (2011), I noted that it didn't do so well compared to various models, including DSGE models. However, there are two additional pieces of information: first, that was a constant 2% inflation model (the dynamic equilibrium rate is 1.7% [2]); second, the time period used in Edge and Gurkaynak (2011) contains the tail end of the 70s shock (beginning in the late 60s and persisting until the 90s) that I've associated with women entering the workforce:


The period studied by Edge and Gurkaynak (2011) was practically aligned with a constant inflation period per the dynamic information equilibrium model [3]. We can also see the likely source of the low bias of the monetary IE model — in fitting the ansatz for 〈k〉 (see here) we are actually fitting to a fading non-equilibrium shock. That results in an over-estimate of the rate of the slow fall in 〈k〉 we should expect in an ensemble model, which in turn results in a monetary model exhibiting slowly decreasing inflation over the period of performance for this forecast instead of roughly constant inflation.

We can learn a lot from these comparisons of models to data. For example, if there are long term processes in play (e.g. women entering the workforce), the time periods you use to compare models are going to matter a lot. Another example: constant inflation is actually hard to beat as a forecast of inflation in the 21st century — which means the information content of the inflation time series is actually pretty low (meaning complex models are probably flat-out wrong). A corollary of that is that it's not entirely clear monetary policy does anything. Yet another example is that if 〈k〉 is falling for inflation in the IE model, it is a much slower process than we can see in the data.
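A minimal version of that "hard to beat" check — assuming FRED access via pandas_datareader and the headline PCE price index series PCEPI, with the 1.7% dynamic equilibrium rate standing in for the constant model:

```python
# A sketch, not the blog's model code: compare year-over-year PCE inflation to a
# constant 1.7% "forecast". Assumes pandas_datareader's FRED source and the PCEPI
# series; swap in core PCE or another index as desired.
import numpy as np
from pandas_datareader import data as pdr

pce = pdr.DataReader("PCEPI", "fred", "2000-01-01", "2017-11-01")["PCEPI"]
inflation = pce.pct_change(12).dropna() * 100      # year-over-year inflation, percent

resid = inflation - 1.7                            # residuals against constant inflation
print(f"mean residual: {resid.mean():+.2f} pp, RMSE: {np.sqrt((resid ** 2).mean()):.2f} pp")
```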

Part of the reason I started my blog and tried to apply some models to empirical data myself was that I started to feel like macroeconomic theory — especially when it came to inflation — seemed unable to "add value" beyond what you could do with some simple curve fitting. I've only become more convinced of that over time. Even if the information equilibrium approach turns out to be wrong, the capacity of the resulting functional forms to capture the variation in the data with only a few parameters severely constrains the relevant complexity [4] of macroeconomic models.

...

Footnotes:

[1] See also here and here for some additional discussion and where I made the point about the dynamic equilibrium model as a constant inflation model before.

[2] See also this on "2% inflation".

[3] You may notice the small shock in 2013. It was added based on information (i.e. a corresponding shock) in nominal output in the "quantity theory of labor" model. It is so small it is largely irrelevant to the model and the discussion.

[4] This idea of relevant complexity is related to relevant information in the information bottleneck as well as effective information in Erik Hoel's discussion of emergence that I talk about here. By related, I mean I think it is actually the same thing, but I am just too lazy and/or dumb to show it formally. The underlying idea is that describing a set of data well enough with a function of just a few parameters is the same process at work in the information bottleneck (a few neuron states capture the relevant information of the input data) and in Hoel's emergence (where you encode the data in the most efficient way — the fewest symbols).

Tuesday, November 28, 2017

Dynamic information equilibrium: world population since the neolithic

Apropos of nothing (well, Matthew Yglesias's new newsletter where he referenced this book from Kyle Harper on Ancient Rome), I decided to try the dynamic information equilibrium model on world population data. I assumed the equilibrium growth rate was zero, and fit the model to data. The prediction is about 12.5 billion humans in 2100 (putting it toward the middle-to-higher end of these projections) with an equilibrium population of about 13.4 billion.

There were four significant transitions in the data centered at 2390 BCE, 500 BCE, 1424, and 1954. The widths (transition durations) were ~ 1000 years, between 0 and 100 years (highly uncertain, but small), ~ 300 years, and ~ 50 years, respectively. Historically, we can associate the first with the neolithic revolution following the Holocene Climate Optimum (HCO). The second appears right around the dawn of the Roman Republic. The third follows the Medieval Warm Period (MWP) and is possibly another agricultural revolution that is ending, while the final one is our modern world and is likely associated with public health and medical advances (it began near the turn of the century in 1900). Here's what a graph looks like:


I included some random items from (mostly) Western history to give readers some points of reference. The interesting thing is that "exponential growth" with a positive growth rate of 1% to 2% is really only a local approximation. Over history, the population growth rate is typically zero:


Some major technology developments seem to happen on the leading edge of these transitions (writing, money, horse collar/heavy plow, computers). A more systematic study of technology might yield some pattern — my hypothesis (i.e. random guess) is that there are bursts of tech development associated with these transitions as people try to handle the changes in society during the population surges. There are likely social organization changes as well — the third transition roughly coincides with the rise of nation-states, and the fourth with modern urbanization.
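To illustrate the point above that exponential growth is only a local approximation, here is a minimal sketch of a single logistic transition in log population. The 1954 center is from the fit described above; the width parameter and amplitude are rough placeholders (the widths quoted above are transition durations, not necessarily this logistic scale):

```python
# A sketch with placeholder parameters: one logistic transition in log population
# produces a growth-rate pulse that peaks near the transition center and decays back
# toward the zero equilibrium growth rate.
import numpy as np

t = np.linspace(1800, 2100, 601)
t0, width, amp = 1954.0, 20.0, 1.7                   # center from the fit; width/amplitude hypothetical

log_pop = amp / (1.0 + np.exp(-(t - t0) / width))    # logistic step in log population
growth = np.gradient(log_pop, t) * 100               # growth rate in percent per year

print(f"peak growth rate: {growth.max():.2f}%/yr near {t[growth.argmax()]:.0f}")
print(f"growth rate in 2100: {growth[-1]:.2f}%/yr")
```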

Tuesday, November 21, 2017

Dynamic information equilibrium: UK CPI

The dynamic information equilibrium CPI model doesn't just apply to the US. Here is the UK version (data is yellow, model is blue):


The forecast is for inflation to run at about 2.1% (close to the US dynamic equilibrium of 2.5%) in the absence of shocks:



Monday, November 20, 2017

Numerical experiments and the paths to scientific validity

Christiano et al got much more attention than their paper deserved by putting a few choice lines in it (Dilettantes! Ha!). Several excellent discussions of the paper — in particular this aspect — are available from Jo Mitchell, Brad DeLong (and subsequent comments), and Noah Smith.

I actually want to defend one particular concept in the paper (although as with most attempts at "science" by economists, it comes off as a nefarious simulacrum). This will serve as a starting point to expand on how exactly we derive knowledge from the world around us. The idea of "DSGE experiments" was attacked by DeLong, but I think he misidentifies the problem [1]. Here is Christiano et al:
The only place that we can do experiments is in dynamic stochastic general equilibrium (DSGE) models.
This line was attacked for its apparent mis-use of the word "experiment", as well as the use of "only". It's unscientific! the critics complain. But here's an excerpt from my thesis:
The same parameters do a good job of reproducing the lattice data for several other quark masses, so the extrapolation to the chiral limit shown in Fig. 2.3 is expected to allow a good qualitative comparison with the instanton model and the single Pauli-Villars subtraction used in the self-consistent calculations.
Lattice data. I am referring to the output of lattice QCD computations — computations somewhat analogous to using e.g. the trapezoid rule to compute integrals — as "data", i.e. the output of observations. Robert Waldman, in comments on DeLong's post, makes a distinction between hypothesis (science) and conjecture (math) that would rule out this "lattice QCD data" as the result of "lattice QCD experiments". But this distinction is far too strict, as it would rule out actual science done by actual scientists (i.e. physicists, e.g. me).

Saying "all simulations derived from theory are just math, not science" misses the nuance provided by understanding how we derive knowledge from the world around us, and lattice QCD provides us with a nice example. The reason we can think of lattice QCD simulations as "experiments" that produce "data" is that we can define a font of scientific validity sourced from empirical success. The framework lattice QCD works with (quantum field theory) has been extensively empirically validated. The actual theory lattice QCD uses (QCD) has been empirically validated at high energy. As such, we can believe the equations of QCD represent some aspect of the world around us, and therefore simulations using them are a potential source of understanding that world. Here's a graphic representing this argument:


Of course, the lattice data could disagree with observations. In that case we'd likely try to understand the error in the assumptions we made in order to produce tractable simulations, or possibly limit the scope of QCD (e.g. perturbative QCD fails at low energy, Q² < 1 GeV²).

The reason the concept of DSGE models as "experiments" is laughable is that it fails every step in this process:


Not only does the underlying framework (utility maximization) disagree with data in many cases, but even the final output of DSGE models also disagrees. The methodology isn't flawed — its execution is.

*  *  *

The whole dilettantes fracas is a good segue into something I've been meaning to write for a while now about the sources of knowledge about the real world. I had an extended Twitter argument with Britonomist about whether having a good long think about a system is a scientific source of knowledge about that system (my take: it isn't).

Derivation

The discussion above represents a particular method of acquiring knowledge about the world around us that I'll call derivation for obvious reasons. Derivation is a logical inference tool: it takes empirically validated mathematical descriptions of some piece of reality and attempts to "derive" new knowledge about the world. In the case of lattice QCD, we derive some knowledge about the vacuum state based on the empirical success of quantum field theory (math) used to describe e.g. magnetic moments and deep inelastic scattering. The key here is understanding the model under one scope condition well enough that you can motivate its application to others. Derivation uses the empirical validity of the mathematical framework as its source of scientific validity.

Observation

Another method is the use of controlled experiments and observation. This is what a lot of people think science is, and it's how it's taught in schools. Controlled experiments can give us information about causality, but one of the key things all forms of observation do is constrain the complexity of what the underlying theory can be through what is sometimes derided as "curve fitting" (regressions). Controlled experiments and observation mostly exclude potential mathematical forms that could be used to describe the data. A wonderful example of this is blackbody radiation in physics. The original experiments basically excluded various simple computations based on Newton's mechanics and Maxwell's electrodynamics. Fitting the blackbody radiation spectrum curve with functional forms of decreasing complexity ultimately led to Planck's single parameter formula that paved the way for quantum mechanics. The key assumption here is essentially Hume's uniformity of nature to varying degrees depending on the degree of control in the experiment. Observation uses its direct connection to empirical reality as its source of scientific validity.

Indifference

A third method is the application of the so-called "principle of indifference" that forms the basis of statistical mechanics in physics and is codified in various "maximum entropy" approaches (such as the one used on this blog). We as theorists plead ignorance of what is "really happening" and just assume what we observe is the most likely configuration of many constituents given various constraints (observational or theoretical). Roderick Dewar has a nice paper explaining how this process is a method of inference giving us knowledge about the world, and not just additional assumptions in a derivation. As mentioned, the best example is statistical mechanics: Boltzmann assumed simply that there were lots of atoms underlying matter (which was unknown at the time) and used probability to make conclusions about the most likely states — setting up a framework that accurately describes thermodynamic processes. The key assumption here is that the number of underlying degrees of freedom is large (making our probabilistic conclusions sharper), and "indifference" uses the empirical accuracy of its conclusions as the source of its scientific validity.

Other paths?

This list isn't meant to be exhaustive, and there are probably other (yet undiscovered!) paths to scientific validity. The main conclusion here is that empirical validity in some capacity is necessary to achieve scientific validity. Philosophizing about a system may well be fun and lead to convincing plausible stories about how that system behaves. And that philosophy might be useful for e.g. making decisions in the face of an uncertain future. But regardless of how logical it is, it does not produce scientific knowledge about the world around us. At best it produces rational results, not scientific ones.

In a sense, it's true that the descriptions above form a specific philosophy of science, but they're also empirically tested methodologies. They're the methodologies that have been used in the past to derive accurate representations of how the world around us works at a fundamental physical level. It is possible that economists (including Christiano et al) have come up with another path to knowledge about the world around us where you can make invalid but prima facie sensible assumptions about how things work and derive conclusions, but it isn't a scientific one.

...

Footnotes:

[1] Actually, the problem seems misidentified in a similar way that Friedman's "as if" methodology is misidentified: the idea is fine (in science it is called "effective theory"), but the application is flawed. Friedman seemed to first say matching the data is what matters (yes!), but then didn't seem to care when preferred theories didn't match data (gah!).

Friday, November 17, 2017

The "bottom up" inflation fallacy

Tony Yates has a nice succinct post from a couple of years ago about the "bottom up inflation fallacy" (brought up in my Twitter feed by Nick Rowe):
This "inflation is caused by the sum of its parts" problem rears its head every time new inflation data gets released. Where we can read that inflation was ’caused’ by the prices that went up, and inhibited by the prices that went down.
I wouldn't necessarily attribute the forces that make this fallacy a fallacy to the central bank as Tony does — at the very least, if central banks can control inflation, why are many countries (US, Japan, Canada) persistently undershooting their stated or implicit targets? But you don't really need a mechanism to understand this fallacy, because it's actually a fallacy of general reasoning. If we look at the components of inflation for the US (data from here), we can see various components rising and falling:


While the individual components move around a lot, the distribution remains roughly stable — except for the case of the 2008-9 recession (see more here). It's a bit easier to see the stability using some data from MIT's billion price project. We can think of the "stable" distribution as representing a macroeconomic equilibrium (and the recession being a non-equilibrium process). But even without that interpretation, the fact that an individual price moves still tells us almost nothing about the other prices in the distribution if that distribution is constant. And it's definitely not a causal explanation.

It does seem to us as humans that if there is something maintaining that distribution (central banks per Tony), then an excursion by one price (oil) is being offset by another (clothing) in order to maintain that distribution. However, there does not have to be any force acting to do so.

For example, if the distribution is a maximum entropy distribution then the distribution is maintained simply by the fact that it is the most likely distribution (consistent with constraints). In the same way it is unlikely that all the air molecules in your room will move to one side of it, it is just unlikely that all the prices will move in one direction — but they easily could. For molecules, that probability is tiny because there are huge numbers of them. For prices, that probability is not as negligible. In physics, the pseudo-force "causing" the molecules to maintain their distribution is called an entropic force. Molecules that make up a smell of cooking bacon will spread around a room in a way that looks like they're being pushed away from their source, but there is no force on the individual molecules making that happen. There is a macro pseudo-force (diffusion), but there is no micro force corresponding to it.
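A toy numerical version of that point — component price changes drawn from a fixed (entirely hypothetical) distribution, with no offsetting forces between them:

```python
# A toy illustration, not a model of actual CPI components: each component's price
# change is drawn independently from the same fixed distribution every month.
# Individual components swing around a lot, but the aggregate barely moves — no
# micro-level offsetting force is required to keep the distribution stable.
import numpy as np

rng = np.random.default_rng(42)
n_components, n_months = 200, 120
changes = rng.normal(loc=2.0, scale=5.0, size=(n_months, n_components))  # %/yr, hypothetical

aggregate = changes.mean(axis=1)
print(f"typical component swing (std): {changes.std():.1f} pp")
print(f"aggregate inflation swing (std): {aggregate.std():.2f} pp")
```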

I've speculated that this general idea is involved in so-called sticky prices in macroeconomics. Macro mechanisms like Calvo pricing are in fact just effective descriptions at the macro scale, and therefore studies that look at individual prices (e.g. Eichenbaum et al 2008) will not see sticky prices.

In a sense, yes, macro inflation is due to the price movements of thousands of individual prices. And it is entirely possible that you could build a model where specific prices offset each other via causal forces. But you don't have to, and there exist ways of constructing a model where there isn't necessarily any way to match up macro inflation with specific individual changes, because macro inflation is about the distribution of all price changes. That's why I say the "bottom up" fallacy is a fallacy of general reasoning, not just a fallacy according to the way economists understand inflation today: it assumes a peculiar model. And as Tony tells us, that's not a standard macroeconomic model (which is based on central banks setting e.g. inflation targets).

You can even take this a bit further and argue against the position that microfoundations are necessary for a macroeconomic model. It is entirely possible for macroeconomic forces to exist for which there are no microeconomic analogs. Sticky prices are a possibility; Phillips curves are another. In fact, even rational representative agents might not exist at the scale of human beings, but could be perfectly plausible effective degrees of freedom at the macro scale (per Becker 1962 "Irrational Behavior and Economic Theory", which I use as the central theme in my book).

Thursday, November 16, 2017

Unemployment rate step response over time

One of the interesting effects I noticed in looking at the unemployment rate in earlier recessions with the dynamic equilibrium model was what looked like "overshooting" (step response "ringing" transients). For fun, I thought I'd try to model the recession responses using a simple "two pole" model (a second order low pass system).
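For concreteness, here is what that second-order ("two pole") step response looks like, sketched with scipy.signal using illustrative parameter values rather than the fitted ones:

```python
# A sketch of a second-order low-pass step response; omega and zeta are illustrative,
# not values fitted to any recession.
import numpy as np
from scipy import signal

omega, zeta = 2.0, 0.4                        # natural frequency (rad/yr) and damping ratio
sys = signal.TransferFunction([omega**2], [1.0, 2.0 * zeta * omega, omega**2])

t, y = signal.step(sys, T=np.linspace(0, 10, 500))
omega_d = omega * np.sqrt(1.0 - zeta**2)      # damped frequency seen in the "ringing"
print(f"overshoot: {y.max() - 1:.2f}, damped frequency: {omega_d:.2f} rad/yr")
```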

For example, here is the log-linear transformation of the unemployment rate that minimizes entropy:


If we zoom in on one of the recessions in the 1950s, we can fit it to the step response:


I then fit several more recessions, transformed back to the original data representation (unemployment rate in percent), and compiled the results:


Overall, this was just a curve fitting exercise. However, what was interesting were the parameters over time. These graphs show the frequency parameter ⍵ and the damping parameter ζ:


Over time, the frequency falls and the damping increases. We can also show the damped frequency ⍵_d = ⍵ √(1 − ζ²), a particular combination of the two (this is the frequency that we'd actually estimate from looking directly at the oscillations in the plot):


With the exception of the 1970 recession, this shows a roughly constant fairly high frequency that falls after the 1980s to a lower roughly constant frequency.

At this point, this is just a series of observations. This model adds far too many parameters to really be informative (for e.g. forecasting). What is interesting is that the step response in physics results from a sharp shock hitting a system with a band-limited response (i.e. the system cannot support all the high frequencies present in the sharp shock). This would make sense — in order to support higher frequencies, you'd probably have to have people entering and leaving jobs at rates close to monthly or even weekly. While some people might take a job for a month and quit, they likely don't make up the bulk of the labor force. This doesn't really reveal any deep properties of the system, but it does show how unemployment might well behave like a natural process (contra many suggestions e.g. that it is definitively a social process that cannot be understood in terms of mindless atoms or mathematics).

Wednesday, November 15, 2017

New CPI data and forecast horizons

New CPI data is out, and here is the "headline" CPI model last updated a couple months ago:


I did change the error bar on the derivative data to show the 1-sigma errors instead of the median error in the last update. The level forecast still shows the 90% confidence for the parameter estimates. 

Now why wasn't I invited to this? One of the talks was on forecasting horizons:
How far can we forecast? Statistical tests of the predictive content
Presenter: Malte Knueppel (Bundesbank)
Coauthor: Jörg Breitung
A version of the talk appears here [pdf]. One of the measures they look at is year-over-year CPI, which according to their research seems to have a forecast horizon of 3 quarters — relative to a stationary ergodic process. The dynamic equilibrium model is approaching 4 quarters:


The thing is, however, that the way the authors define whether a forecast is informative is relative to a "naïve forecast" that's constant. The dynamic equilibrium forecast does have a few shocks — one centered at 1977.7 associated with the demographic transition of women entering the workforce, and one centered at 2015.1 that I've tentatively associated with baby boomers leaving the workforce [0] after the Great Recession (the one visible above) [1]. But the forecast for the period from the mid-90s (after the 70s shock ends) until the start of the Great Recession would in fact be this "naïve forecast":


The post-recession period does involve a non-trivial (i.e. not constant) forecast, so it could be "informative" in the sense of the authors above. We will see if it continues to be accurate beyond their forecast horizon. 

...

Footnotes

[0] Part of the reason for positing this shock is its existence in other time series.

[1] In the model, there is a third significant negative shock centered at 1960.8 associated with a general slowdown in the prime age civilian labor force participation rate. I have no firm evidence of what caused this, but I'd speculate it could be about women leaving the workforce in the immediate post-war period (the 1950s-60s "nuclear family" presented in propaganda advertising) and/or the big increase in graduate school attendance.

Friday, November 10, 2017

Why k = 2?

I put up my macro and ensembles slides as a "Twitter talk" (Twalk™?) yesterday and it reminded me of something that has always bothered me since the early days of this blog: Why does the "quantity theory of money" follow from the information equilibrium relationship N ⇄ M for information transfer index k = 2?

From the information equilibrium relationship, we can show log N ~ k log M, and since the price level is the abstract price of that relationship, P ≡ dN/dM ~ k N/M, it follows that log P ~ (k − 1) log M. This means that for k = 2

log P ~ log M

That is to say the rate of inflation is equal to the rate of money growth for k = 2. Of course, this is only empirically true for high rates of inflation:


But why k = 2? It seems completely arbitrary. In fact, it is so arbitrary that we shouldn't really expect the high inflation limit to obey it. The information equilibrium model allows all positive values of k. Why does it choose k = 2? What is making it happen?

I do not have a really good reason. However, I do have some intuition.

One of the concepts in physics that the information equilibrium approach is related to is diffusion. In that case, most values of k represent "anomalous diffusion". But ordinary diffusion with a Wiener process (a random walk based on a normal distribution) results in a spread where the distance traveled goes as the square root of time, σ ~ √t. That square root arises from the normal distribution, which is in fact a universal distribution (there's a central limit theorem for distributions that converge to it). Put another way:

2 log σ ~ log t

is an information equilibrium relationship t ⇄ σ with k = 2.
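A quick numerical check of that statement (a plain random walk, nothing specific to the economic model):

```python
# Simulate many ordinary random walks and check that the spread sigma grows like
# sqrt(t), i.e. 2 log(sigma) ~ log(t) — an IT index of k = 2.
import numpy as np

rng = np.random.default_rng(1)
walks = rng.normal(size=(10_000, 1_000)).cumsum(axis=1)   # 10,000 random walks

t = np.arange(1, 1_001)
sigma = walks.std(axis=0)                                  # spread across walks at each step

slope = np.polyfit(np.log(t), np.log(sigma), 1)[0]
print(f"fitted slope of log(sigma) vs log(t): {slope:.3f} (expect ~0.5)")
```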

If we think of output as a diffusion process (distance is money, time is output), we can say that in the limit of a large number of steps, we obtain

2 log M ~ log N

as a diffusion process, which implies log P ~ log M.

Of course, there are some issues with this besides it being hand-waving. For one, output is the independent variable corresponding to time. This does not reproduce the usual intuition that money should be causing the inflation, but rather the reverse (the spread of molecules in diffusion is not causing time to go forward [1]). But then applying the intuition from a physical process to an economic one via an analogy is not always useful.

I tried to see if it came out of some assumptions about money M mediating between nominal output N and aggregate supply S, i.e. the relationship

N ⇄ M ⇄ S

But that didn't get me much further, aside from figuring out that if the IT index k in the first half is k = 2 (per above), then the IT index k' for M ⇄ S would have to be 1 + φ or 2 − φ, where φ is the golden ratio, in order for the equations to be consistent. The latter value k' = 2 − φ ≈ 0.38 implies that the IT index for N ⇄ S is k k' ≈ 0.76, while the former implies k k' ≈ 5.24. But that's not important right now. It doesn't tell us why k = 2.

Another place to look would be the symmetry properties of the information equilibrium relationship, but k = 2 doesn't seem to be anything special there.

I thought I'd blog about this because it gives you a bit of insight as to how physicists (or at least this particular physicist) tend to think about problems — as well as point out flaws (i.e. ad hoc nature) in the information equilibrium approach to the quantity theory of money/AD-AS model in the aforementioned slides. I'd also welcome any ideas in comments.

...

Footnotes:

[1] Added in update. You could make a case for the "thermodynamic arrow of time", in which case the increase in entropy is actually equivalent to "time going forward".

Interest rates and dynamic equilibrium

What if we combine an information equilibrium relationship A ⇄ B with a dynamic information equilibrium description of the inputs A and B? Say, the interest rate model (described here) with dynamic equilibrium for investment and the monetary base? Turns out that it's interesting:



The first graph is the long term (10-year) rate and the second is the short term (3 month secondary market) rate. Green is the information equilibrium model alone (i.e. the data as input), while the gray curves show the result if we use the dynamic equilibria for GPDI and AMBSL (or CURRSL) as input.
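Schematically, the combination works like the sketch below: feed dynamic-equilibrium descriptions of the two inputs into the information equilibrium price p = k A/B, then map that price to an interest rate. The log-linear mapping and every parameter value here are placeholders for illustration — the fitted model is in the linked posts and repositories:

```python
# Schematic only: the dynamic-equilibrium inputs, the IE price p = k A/B, and the
# log-linear map from p to an interest rate all use placeholder parameters.
import numpy as np

def diem(t, level0, rate, shocks):
    """Exponential trend plus logistic shocks (amp, center, width) in log space."""
    log_x = np.log(level0) + rate * (t - t[0])
    for amp, t0, width in shocks:
        log_x += amp / (1.0 + np.exp(-(t - t0) / width))
    return np.exp(log_x)

t = np.linspace(1990, 2020, 361)
A = diem(t, 1.0, 0.05, [(-0.3, 2008.8, 1.0)])   # "investment"-like input (hypothetical)
B = diem(t, 0.5, 0.06, [( 0.4, 2009.5, 1.0)])   # "monetary base"-like input (hypothetical)

k, c, b = 1.0, 2.8, -1.0                        # placeholder parameters
p = k * A / B                                   # information equilibrium price
r = np.exp(c * np.log(p) + b)                   # assumed log-linear map from p to a rate
print(f"illustrative rate at the end of the sample: {r[-1]:.2f}")
```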

Here is the GPDI dynamic equilibrium description for completeness (the link above uses fixed private investment instead of gross private domestic investment which made for a better interest rate model):