Tuesday, March 21, 2017

India's demonetization and model scope

Srinivas [1] has been requesting for a while now that I look into India's demonetization using the information equilibrium (IE) framework. One of the reasons I haven't done so is that I can't seem to find decent NGDP data from before 1996 or so. I'm going to proceed using this limited data set because there are several results that I think a) illustrate how to deal with different models with different scope, and b) show that monetary models are not very useful here.

Previously, the only "experiment" with currency in circulation I had encountered was the Fed's stockpiling of currency before the year 2000 in preparation for any issues:


This temporary spike had no impact on inflation or interest rates. Economists would say that the spike was expected to be taken away, and therefore there would be no impact. Scientifically, all we can say is that rapid changes in M0 do not necessarily cause rapid changes in other variables. This makes sense if entropic forces are what maintain these relationships between macroeconomic aggregates. Another example is the Fed's recent increases in short-term interest rates. The adjustment of the monetary base to the new equilibrium appears to be a process with a time scale on the order of years.

If either interest rates or monetary aggregates are changed, it takes time for agents to find or explore the corresponding change in state space.

India recently removed a bunch of currency in circulation (M0). If the historical relationship between nominal output (NGDP) and M0 were to hold, we'd get a massive fall in output and the price level. However, the change in M0 appears to be quick:


So, what do the various models have to say about this?

Interest rates

The interest rate model says that a drop in M0 should raise interest rates, ceteris paribus. However, this IE relationship only holds on average over several years. Were the drop in M0 to persist, we should expect higher long-term interest rates in India:


However, if M0 continues to rise as quickly as it did in Dec 2016, Jan 2017, and Feb 2017, then we probably won't see any effect at all (much like the year 2000 effect described above). M0 would need to remain at a lower level for an extended period for rates to rise appreciably.

This is to say that the model scope is long time periods (on the order of years to decades), and therefore sharp changes are out of scope.

Monetary model

Previously, India (like many other countries) has shown an information equilibrium relationship (described at the end of these slides) between M0 and NGDP with an information transfer index (k) on the order of 1.5. A value of k = 2 means a quantity theory of money economy, while a lower value means that prices and output respond much less to changes in M0.
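For reference, the general equilibrium solution of an information equilibrium relationship between NGDP and M0 with information transfer index k takes roughly the form (this is a sketch of the framework with "ref" denoting reference values, not a new result):

    \[
    \text{NGDP} \simeq \text{NGDP}_{\text{ref}} \left(\frac{\text{M0}}{\text{M0}_{\text{ref}}}\right)^{k},
    \qquad
    P \equiv \frac{d\,\text{NGDP}}{d\,\text{M0}} = k\,\frac{\text{NGDP}}{\text{M0}} \propto \text{M0}^{\,k-1}
    \]

So k = 2 gives a price level that grows with M0 (the quantity theory limit), while k near 1 means the price level barely responds to M0 at all; k ~ 1.5 puts India in between.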


In fact, as I mentioned in a post from yesterday, monetary models only appear to be good effective theories when inflation is above 10%, and in that case we should find k = 2. The fact that k < 2 implies the monetary theory is out of scope and that something more complex is happening.

The quantity theory of labor

The monetary models don't appear to be very useful in this situation. However, one model that does do well for countries with k < 2 is the quantity theory of labor (and capital). This is basically the information equilibrium version of the Solow model (but it deals with nominal values, doesn't have varying factor productivity, and doesn't have constant returns to scale). Unfortunately, the time series data doesn't go back very far and there aren't a lot of major fluctuations. Even so, the model does provide a decent description of output and inflation:


The exponents are 1.6 for capital and 0.9 for labor, meaning India is a great place to get a return on capital investment (the US has 0.7 and 0.8, and the UK has 1.0 and 0.5, respectively).
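I won't reproduce the actual fit here, but a minimal sketch of how one could estimate exponents like these (ordinary least squares on logs) looks like the following. Everything in it is a synthetic stand-in ‒ the series, noise levels, and seed are made up, not the Indian data:

    import numpy as np

    # Synthetic stand-ins for nominal capital (K), labor (L), and NGDP;
    # exponents of 1.6 and 0.9 are baked in so the fit has something to recover.
    rng = np.random.default_rng(0)
    t = np.arange(20.0)
    K = np.exp(0.08 * t + 0.10 * rng.standard_normal(20))
    L = np.exp(0.02 * t + 0.10 * rng.standard_normal(20))
    NGDP = K**1.6 * L**0.9 * np.exp(0.01 * rng.standard_normal(20))

    # Least squares on logs: log NGDP = alpha log K + beta log L + c
    X = np.column_stack([np.log(K), np.log(L), np.ones_like(t)])
    (alpha, beta, c), *_ = np.linalg.lstsq(X, np.log(NGDP), rcond=None)
    print(f"capital exponent ~ {alpha:.2f}, labor exponent ~ {beta:.2f}")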

This model tells us that inflation is primarily due to an expanding labor force, and therefore demonetization should have little to no effect on it.

Dynamic equilibrium

The dynamic equilibrium approach to prices (price indices) and ratios of quantities has shown remarkable descriptive power, as I've shown in several recent posts (e.g. here). India is no different, and inflation over the past 15 years can be described pretty well by a single shock centered in late 2010 and lasting on the order of one and a half years:


This model doesn't tell us the source of the shock, but unless another shock hits we should expect inflation to continue at the same rate as it has over the past two years (averaging 4.7% inflation). This also means that the demonetization should have little to no effect.
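The fits themselves aren't reproduced here, but the functional form is simple enough to sketch in a few lines of Python. The function name and the parameter values below are illustrative placeholders (roughly matching the numbers quoted above), not the fitted values:

    import numpy as np

    def log_price_level(t, alpha, c, a, t0, tau):
        """Dynamic equilibrium ansatz: log CPI grows at a constant rate alpha
        plus a single logistic shock of total (log) size a, centered at t0,
        with a width of roughly tau."""
        return alpha * t + c + a / (1.0 + np.exp(-(t - t0) / tau))

    # ~4.7% equilibrium inflation with a shock centered in late 2010
    # lasting on the order of 1.5 years (illustrative parameters)
    t = np.linspace(2002.0, 2017.0, 301)
    log_cpi = log_price_level(t, alpha=0.047, c=0.0, a=0.25, t0=2010.8, tau=1.5)

    inflation = np.gradient(log_cpi, t)   # instantaneous inflation rate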

Summary

The preponderance of model evidence tells us that the demonetization should have little to no effect on inflation or output. The speed at which it was enacted means that the monetary models are out of scope and tell us nothing; we can only rely on other models that are in scope, and those have no dependence on M0.

...

Footnotes

[1] Srinivas also sent me much of the data used in this post.

Monday, March 20, 2017

Using PCA to remove cyclical effects

One potential use of the principal component analysis I did a couple of days ago is to subtract the cyclical component of the various sectors. I thought I'd take it a step further and use a dynamic equilibrium model to describe the cyclical principal component and then subtract the estimated model. What should be left over are the non-cyclical pieces.
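My actual code for this isn't shown here, but the subtraction step is easy to sketch. In the version below the function name is mine, the matrix X is just a placeholder for the (sector × time) hires data, and it removes a raw principal component rather than the dynamic equilibrium fit to it:

    import numpy as np

    def remove_component(X, comp_index=0):
        """Subtract one principal component's contribution from a
        (sector x time) matrix X. In the post, the 'cyclical' component is
        first described by a dynamic equilibrium fit and that fitted model
        is what gets subtracted; here the raw component is removed."""
        mean = X.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
        piece = s[comp_index] * np.outer(U[:, comp_index], Vt[comp_index])
        return X - piece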

First, here's the model and the principal component data; the description is pretty good:


I won't bore you with listing the results for every sector (you can ask for an update with your favorite sector in the comments and I will oblige). Let me just focus on the interesting sectors with regard to the economic "boom" of 2013-2014. There are three different behaviors. The first is a temporary bump that seems to be concentrated in retail trade:


The bump begins in mid-2013 (vertical line) and ends in mid-2016.

The second behavior is job growth. For example, here is health care and social assistance; the rise begins around the date the ACA goes into effect (vertical line):


The third behavior is unique to government hiring, specifically at the state and local level. It drops precipitously at the 2016 election (vertical line):


Note that this doesn't mean hiring dropped to zero; it just means state and local government hiring dropped back to its cyclical level after being above it (because of the ACA, for example).

Belarus and effective theories


Scott Sumner makes a good point, using Belarus as an example, that inflation is not only about demographics. However, I think this example is a great teachable moment about effective theories. The data on inflation versus monetary base growth shows two distinct regimes; the graph depicting this above is from a diagram in David Romer's Advanced Macroeconomics. One regime is high inflation, and because it is pretty well described by the quantity theory of money (the blue line) I'll call it the quantity theory of money regime. The second regime is low inflation. It is much more complex and is probably related to multiple factors, at least partially including demographics (or, e.g., price controls).

The scale that separates the two regimes (and that defines the scope of the quantity theory of money theory) is on the order of 10% inflation (gray horizontal line). For inflation rates ~ 10% or greater, the quantity theory is a really good effective theory. What's also interesting is that the theory of inflation seems to simplify greatly (becoming a single-factor model). It is also important to point out that there is no accepted theory that covers the entire data set ‒ that is to say there is no theory with global scope.

In physics, we'd say that the quantity theory of money has a scale of τ₀ ~ 10 years (i.e. 10% per annum). For base growth time scales shorter than this (say, β₀ ~ 5 years, i.e. 20% per annum), we can use the quantity theory.
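Just to make the unit conversion explicit (an exponential growth rate is an inverse time scale; the symbols are the ones in the paragraph above):

    \[
    \tau \sim \frac{1}{r}: \qquad
    r \simeq 10\%/\text{yr} \;\leftrightarrow\; \tau_0 \sim 10\ \text{yr},
    \qquad
    r \simeq 20\%/\text{yr} \;\leftrightarrow\; \beta_0 \sim 5\ \text{yr} < \tau_0
    \]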

At 10% annual inflation, Belarus should be decently described by the quantity theory of money with other factors; indeed base growth has been on the order of 10%.

The problem is that then Scott says:
So why do demographics cause deflation in Japan but not Belarus?  Simple, demographics don’t cause deflation in Japan, or anywhere else.
Let me translate this into a statement about physics:
So why does quantum mechanics make paths probabilistic for electrons but not for baseballs? Simple, quantum mechanics doesn’t make paths probabilistic for electrons, or anything else.
As you can see, this framing of the question completely ignores the fact that there are different regimes where different effective theories operate (quantum mechanics operates on scales set by de Broglie wavelengths; when the de Broglie wavelength is small, you have a Newtonian effective theory).

Wednesday, March 15, 2017

Washington's unemployment rate, Seattle's minimum wage, and dynamic equilibrium

I live in Seattle, and the big thing in the national news about us is that we raised our minimum wage to $15, which just went into effect for large businesses in January of this year. According to many people who oppose the minimum wage, this should have led to disaster. Did it have an effect? Let's try and see what the dynamic equilibrium model says. I added a couple of extra potential shocks after the big one in 2009:


People who believe the minimum wage did have a negative impact could probably see the negative shock centered at 2015.8 as evidence in their favor. However, that could also be the end of whatever positive shock was centered at 2013.0 (which I think was hiring associated with the ACA/Obamacare). Using a dotted line, I showed what the path would look like if those shocks (positive and negative) were left out. If it was the minimum wage, the effect would have to be based entirely on expectations because the increase is being phased in (not reaching $15/hour for all businesses until 2020):


However, those expectations did not kick in when the original vote happened in June of 2014, so it must be some very complex expectations model. In this second graph I show what the path looks like in the absence of both the 2013.0 and 2015.8 shocks (shorter dashes), as well as in the absence of just the 2015.8 shock (longer dashes). Various theories welcome!

The Fed raised interest rates today, oh boy

The Fed raised its interest rate target to a band of 0.75 to 1.0 percent at today's meeting, so I have to update this graph with a new equilibrium level C'':


We might be able to see whether the interest rate indicator of a potential recession has any use:


This indicator is directly related to yield curve inversion (the green curve needs to be above the gray curve in order for yield curve inversion to become probable). Here are the 3-month and 10-year rates over the past 20 years showing these inversions preceding recessions (on both linear and log scales):



Principal component analysis of jobs data

Narayana Kocherlakota tweeted about employment, making a random claim about "steady state" employment growth that seems to come from nowhere. This inspired me to do something I've been meaning to do for a while: a principal component analysis of the Job Openings and Labor Turnover Survey ('JOLTS') time series data:


I used a pretty basic Karhunen–Loève decomposition (Mathematica function here) on several seasonally adjusted hires time series from FRED (e.g. here). For those interested (apparently no one, but I'll do it anyway), the source code can be found in the Dynamic Equilibrium GitHub repository I set up; a rough Python sketch of the equivalent decomposition also appears below. Here's the result (after normalizing the data):


The major components are the blue and yellow ones (the rest are mostly noise, roughly constant over time). I called these two components the "cyclical" (blue) and the "growth/decline" (yellow) for fairly obvious reasons (the growth/decline component is strongest after 2011). It's growth or decline because the component can enter with a positive (growth) or negative (decline) coefficient. Here is how those two components match up with the original basis:



The story these components tell is consistent with the common narrative and some conventional wisdom:

  • Health care and education are not very cyclical
  • Health care and education are growing 
  • Manufacturing and construction are declining  

Here's health care on its own (which looks pretty much like the growth/decline component):


And here's manufacturing (durable goods):


To get back to Kocherlakota's claim, the "steady state" of jobs growth then might seem to depend on the exact mix of industries (because some are growing and some are declining) and where you are in the business cycle. However, as I showed back in January, total hires can be described by constant relative growth compared to the unemployment level and the number of vacancies ‒ except during a recession. This is all to say: it's complicated [1]. 
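As mentioned above, here's a minimal Python sketch of the decomposition step, standing in for the Mathematica Karhunen–Loève call; the hires matrix is a synthetic placeholder, not the FRED data, and the variable names are mine:

    import numpy as np

    # Placeholder for the (sector x month) matrix of seasonally adjusted hires
    rng = np.random.default_rng(1)
    hires = np.abs(rng.standard_normal((10, 200))) + 1.0

    # Normalize each series, then decompose with an SVD (same idea as the
    # Karhunen-Loeve decomposition used in the post)
    normalized = (hires - hires.mean(axis=1, keepdims=True)) / hires.std(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(normalized, full_matrices=False)

    components = Vt      # rows: principal component time series
    weights = U * s      # how strongly each component appears in each sector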

PS Here are all of the components (here's a link to my Google Drive which shows a higher quality picture):


...

Update 16 March 2017

I added government hiring, normalized data to the mean, and standardized the output of the algorithm. The results don't change, but it looks a bit better:


Here are the two main vectors (standardizing flipped the sign of the growth/decline vector):



And here's the original data, updated with the new data points released today (I subtracted the census peak by interpolating between the two adjacent entries):


...

Footnotes:

[1] Although I'm still not sure where the 1.2 million jobs per year comes from; here's the employment change year over year:
The bottom line is Kocherlakota's 1.2 million figure. The second from the bottom is the 1.48 million rate that comes from averaging the growth rate including recessions (it's almost 1.6 million for just post-1960 data). The upper line is my "guesstimate" for the average excluding recessions (2.5 million). Maybe it was a typo and he meant 2.2 million.

Tuesday, March 14, 2017

Physicists, by XKCD

I keep a print out of this by my desk.


XKCD.

Update 15 March 2017

I also keep a picture of Fourier's grave I took at Père Lachaise:


How do you know if you're researching in bad faith? A handy checklist.

Over the course of a couple of hours I encountered so much bad pseudo-academic work that it inspired me to write down the little checklist of items that go through my head whenever I encounter some research in economics or other social sciences. Sean Carroll has a good one that is more appropriate to the hard sciences:
  1. Acquire basic competency in whatever field of science your discovery belongs to.
  2. Understand, and make a good-faith effort to confront, the fundamental objections to your claims within established science.
  3. Present your discovery in a way that is complete, transparent, and unambiguous.
Actually, the first two items below are basically Carroll's first two, and the following ones are more specific versions of Carroll's third.

Here are my ten criteria ...
How do you know if you're researching in bad faith?  A handy checklist.
⬜   1) You fail to connect your work to prior art in the field you are working in  
⬜   2) You fail to cite references or consider arguments contrary to your own results
⬜   3) You cite non-peer-reviewed, non-canon, or discredited references without arguing why they should be considered 
⬜   4) You fail to compare your model to other models 
⬜   5) If it is theoretical work, you fail to compare your theory to data 
⬜   6) If it is empirical work, you fail to point out the implicit models involved in the data collection
⬜   7) You fail to see or point out how your data representation may be misleading 
⬜   8) You fail to address that your conclusion is politically or academically convenient for you 
⬜   9) You fail to make your data sources and/or software accessible 
⬜   10) You fail to bend over backwards to come up with reasons you might have missed something 
One could get a couple of check marks and still be all right (I'm sure several entries on this blog don't meet every one of these [1]), but checking 3-5 of them is a sign you're dealing with someone who isn't presenting research in good faith (but rather is motivated to sell you something ... an idea, a product).

...

Footnotes:

[1] Every blog entry here passes 1, 4, 5, and 9 automatically via the links in the sidebar (on the desktop version of the site).

Monday, March 13, 2017

Models: predictions and prediction failures

I think I first became aware of Beatrice Cherrier's work on economic history in January of last year when I read Roger Farmer's post on competing terminology. I recently barged into a Twitter conversation started by Cherrier about critiques of economics [1], and ended up reading her blog, which I recommend. I learned a lot about the history of economics with regard to prediction, especially from this post.

I agree with Cherrier that the "economists failed to predict the crisis" (or as it is sometimes taken: "mainstream economists ...") trope is problematic from a scientific standpoint. I listed failures of prediction under the heading of Complaints that depend on framing in my list of valid and invalid critiques of economics. What do we mean by prediction? Conditional forecasts? Unconditional forecasts? Finding a new effect based on theory?

I like to use the example of earthquakes to illustrate this. We cannot predict when earthquakes will happen, however we can predict where they will generally happen to some degree of accuracy (along fault lines or increasingly near areas where fracking is used). The plate tectonics model that predicts earthquakes will occur mostly at plate boundaries also explains some observations about the fossil record (fossils are similar in South America and Africa up until the end of the Jurassic). Does this mean earthquakes are predictable or unpredictable? And if unpredictable, do we consider this a failure of the model or possibly the field of geophysics?

We do not yet know if recessions are predictable in any sense. For example, if recessions are triggered by information cascades among agents, they could arise so quickly that no data could conceivably be collected fast enough to predict them. They'd be inherently unpredictable by the normal process of science. So you can see that an insistence on prediction declares certain kinds of theories (even theories that are potentially accurate in hindsight) to be invalid by fiat.

This possibility sets up a real branding problem for macroeconomics. As I have heard from some commenters on my blog (and generally in the econoblogosphere), a model that is accurate only in hindsight is not useful to most people. This does not mean such a model is unscientific (quantum field theory isn't useful to most people, either), just that a large segment of the population will think the model is failing. As Cherrier points out, this expectation was baked in at the beginning:
Macroeconomics is born out of finance fortune-tellers’ early efforts to predict changes in stock prices and economists’ efforts to explain and tame agricultural and business cycles.
I don't think we've moved beyond this expectation coming from society and politicians. I also think it will be difficult to undo (e.g. by moving towards a view of "economist as doctor" per Simon Wren-Lewis) because macroeconomics deals with issues of importance to society (employment and prices).

Prediction case studies: information equilibrium models

Prediction is an incredibly useful tool in science and can be for economics, but only if the system is supposed to be predictable. Let me show some ways prediction can be used to help understand a system using some examples with the information equilibrium (IE) model I work with on this blog.

In the first example, the dynamic equilibrium model forecast of unemployment depends on whether a recession is coming or not, and produces two different forecasts (shown in gray and red, respectively):


We can see what look like the beginnings of a negative shock (see also here about predicting the size of the global financial crisis shock). This kind of model (if correct) would give leading indicators before a recession starts.

I've used a different model to make an unconditional prediction about the path of the 10-year interest rate:


We can clearly see the shock after the 2016 election. If this model is correct, the evidence of that shock should evaporate in a year or so. This gives us an interesting use case: if the data fails to follow the unconditional forecast, that forecast acts as a counterfactual for the scenario where "nothing happens", allowing one to extract the impacts of policy or other events.

However, there's another example that's more like earthquake prediction: regions where interest rate data is above the "theoretical ideal" curve (gray) for extended periods (highlighted in green) culminate in a recession (red), much like snow building up on a mountainside usually culminates in an avalanche. The latest data says that we've started to build up snow:


This indicator (if it turns out to be correct) doesn't tell us when a recession happens, only if one will possibly happen. According to this model, the chance of recession went above zero in December of 2015.

In yet another example, I put together a prediction where I actually have no information. The information equilibrium model says that the monetary base will generally fall (or output will increase) with interest rate increases, but doesn't say how fast. Essentially, the model tells us where equilibrium is, but not the non-equilibrium process that arrives there. This use of forecasting is primarily as a learning tool:


In the model above, the base should fall towards C' (C is the Dec 2015 rate hike, C' is the Dec 2016 hike, and the likely March 2017 hike will require a C''), but when it should reach it is an unknown parameter since the monetary base has never been this large before. The prediction in a sense already has one success: the model predicted that the base would deviate from the path labeled 0 in the graph.

And in this case, I used prediction performance to reject a modification (adding lags) to a model of inflation. The key point to understand here is that this model wasn't conditional, so a large spike in CPI inflation at the beginning of 2016 was sufficient to reject it:


But a good question to ask is how well can e.g. inflation be predicted? A 2011 study shows that many economic models fail to be as predictive as some simple stochastic processes (or the IE model):


Using the same methodology as the 2011 study, I tested the performance of LOESS smoothing of the data and found the best-case model would probably only score ~ 0.6 at 6 quarters out.
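The smoothing step itself is easy to sketch; the snippet below uses statsmodels' lowess on a synthetic series (the actual test used the CPI data, and the study's scoring isn't reproduced here):

    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    # Smooth a noisy synthetic "inflation" series with LOESS
    rng = np.random.default_rng(2)
    t = np.arange(120.0)                                   # months
    inflation = 2.0 + 0.5 * np.sin(t / 12.0) + 0.5 * rng.standard_normal(120)

    smoothed = lowess(inflation, t, frac=0.2, return_sorted=False)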

As we can see, prediction is a useful tool to tame the proliferation of macroeconomic models. However, I would stress that prediction is not necessarily the best metric by which to determine the usefulness of models in understanding economic systems. For example, the unemployment model above is very uncertain about predicting the size and onset of recession shocks due to some basic issues with estimating the parameters of the exponential functions involved. However, if the model is just as accurate post-shock as it was pre-shock, that is an argument that the model simply fails to forecast during recessions (understandable for an equilibrium model). This is useful for science (and potentially policy design); it's just not useful for people who want forecasts.

...

PS All of the information equilibrium model predictions are collected on this page. I've started uploading the Mathematica codes to GitHub repositories linked here.
 
...

Footnotes:

[1] The initial thread was about defenses of "Econ 101" (or "economism" by James Kwak, or "101ism" by Noah Smith). I wrote what could be considered a defense of "Econ 101" here. I called it Saving the scissors (in reference to supply and demand diagrams, and also a pun on Dan Aykroyd's portrayal of Julia Child on Saturday Night Live where he says "Save the liver"). I proposed that Econ 101 could be defended, but only if one pays close attention to model scope (one entry on my list of "valid" complaints against econ, another of which is prediction per the post above).

S&P 500 forecast versus data: March update

Here is an update of how the S&P 500 forecast is going (the second graph is a zoomed-in view of the recent post-forecast data [black]):



Sunday, March 12, 2017

Comparing the unemployment forecast to the March data

The latest unemployment rate data was released on Friday, so I've added it to the forecast. It's just one point, so it really doesn't tell us much we didn't already know. However, the latest data point (black) does not lend any support to the "no change" counterfactual (red):


The gray curve represents a potential leading edge of a negative shock in the model (these are generally associated with recessions, but could just be the end of the "Obamacare boom"). This shock counterfactual is based on the gray data points.

Tuesday, March 7, 2017

Academic norms and the Charles Murray incident

I sort of stumbled into the discussion sparked by the incident at Middlebury College via a short tweet storm that ended up getting a lot more traction than usual. It also led to a discussion with João Eira that I said I would continue in blog format because Twitter is a difficult medium for nuance. There were several economists in my feed who brought the incident up, the issues touch on scientific methodology, and I think I can make a connection to Russ Roberts' piece [7] that has been making the rounds. Surprisingly, there's also a connection to my recently updated comment policy. Therefore I thought it germane to my blog.

So you don't have to go to Twitter, the gist of my tweet storm was that

  1. Pushing bad science on people breaks an institutional norm, so it is unsurprising that people reacted by breaking another institutional norm
  2. Charles Murray is a virus of enlightenment values (I'll explain more below)
  3. If Murray were a biologist studying legumes with work of comparable quality, he wouldn't have even been invited to speak (well, unless it was by a pro-legume interest group interested in his conclusions)


One thing I'd like to make clear is that I am not encouraging violence. I am saying I understand the strong negative reaction, which turned violent, to letting Murray speak at a college.


Breaking norms


The normal procedure in science and academic pursuits is to first learn the field of study, then do good quality work, publish in peer-reviewed journals, and have those papers cited by either your own generation or younger generations [0]. In short: do useful, credible work. Murray has failed to do this with his most broadly known work [1].


Normally, if a major piece of your work is as discredited as The Bell Curve, you do not get invited to speak at a lunch seminar, much less at a venue with a broader audience. In this, the people inviting Murray to speak violated academic norms. I tried to come up with a good example of how badly this defies norms but was unable to come up with a real-world one. It would be like inviting Martin Fleischmann to speak about cold fusion in a counterfactual world where he and Pons didn't retract their paper. Also recognize how ludicrous it is to say Fleischmann must be allowed to speak about cold fusion in the interest of open discussion. Next week, we'll have a speaker tell us 2 + 2 = 5.

In the US, we're starting to get a taste of what happens when someone repeatedly violates institutional norms; around the world, when this happens the result is usually loud protest with the potential for violence. The Rodney King verdict and the subsequent unrest in Los Angeles come to mind. That verdict was as much a violation of social justice norms as inviting Murray to speak at a college was a violation of academic norms.

The virus: political norms infecting academic norms

One thing to understand about science is that there is a subtle but important difference in the meaning of "open discussion". In politics, we have the "freedom of speech" norm: people are allowed to say what they want. There is no requirement to have any supporting evidence. There is also no requirement that the speaker be open to another's speech. I can publish garbage if I'd like and I don't have to listen to your criticisms.

Science developed well before the freedom of speech was enshrined in the US Constitution and differs from "free speech": you are required to have evidence and you are required to be open to another speaker's speech. These norms are enshrined in the peer review process of academic journals. If I don't present convincing evidence that my paper is correct or if I don't respond convincingly to the reviewer's questions, my paper doesn't get published. I get shut down. It's an arcane and inefficient process, but it enshrines the norms of academia.

The media is the parallel institution to academic journals that follows political norms. It publishes "both sides" of even scientific issues like global warming, and will print factually false statements by people in the spirit of freedom of speech. Editorializing, allow me to say that an institution that operates this way doesn't seem to serve much purpose unless it calls out lies and factual errors or restricts itself to philosophy.

Murray and those like him are viruses that use these competing processes to propagate themselves. Murray publishes a political tract via the media that looks superficially like science analogous to the way a virus simulates proteins that gain access to the cell's machinery. Open academic discussion begins (e.g. here [pdf]) because the subject has been brought up (i.e. the virus DNA is in the cell). Further data is collected and studies are performed that Murray can selectively cite or perform facile analyses on. This is analogous to using the cell's machinery to produce proteins the virus needs to reproduce itself. Murray however is not open to the refutations of his book. It's never retracted (analogy: programmed cell death). When academic pressure tries to right the wrong by denying him academic positions or not publishing his papers (e.g. the body producing a fever), political pressure says that Murray is being censored, that colleges are not allowing open discussion (and ideological institutions like Mercatus or AEI host him, or request his presence in academic forums i.e. exactly what happened). The subsequent academic discussion of his work enables Murray to publish in academic journals despite the low quality of his research.

Credibility, self-editing, and open dialog

A few days before, I wrote up a different short tweet storm dealing with Russ Roberts' views [7] about economic research that turns out to be related. Roberts suggested a new kind of academic openness where economists publish e.g. all of the different exploratory regressions they tried before arriving at the one they put in the paper. I said that good scientific practice dictates that you should record all of this, but that once the paper is published, replication or contrary results (also published) should dictate the debate ‒ "not armchair critical theory analysis of work products" (as I said in the Tweet).

The questioning process of peer review could (and sometimes does) analyze those work products, but the peer review process primarily relies on your academic credibility to allow you to "self-edit" your notes to produce your paper. However, this assumes academic norms, not political norms.

Political norms neither subject you to peer review nor support academic credibility. In short, you cannot trust the self-editing of work produced under political norms, which makes Roberts' call to produce the initial work products a reasonable suggestion. One way of re-framing Roberts' claim is to say that political norms have infected the academic process in economics and therefore we should give up the traditional peer review process for public review.

I personally like this idea (in fact, I follow it with this blog ‒ I effectively publish my scientific notebook on the internet), but it requires its own norms. One is making everything available (e.g. software, data). That's another norm I've followed, inspired by Igor Carron and his blog dealing with signal processing and machine learning (here's the hardware and software implementations page). This takes the focus off of peer review and puts it on reproducibility.

Another norm this requires is for the resulting open discussion to be genuinely open in both directions. Reputations need to follow bloggers and blog commenters, and everyone needs to be genuinely interested in dialog and capable of accepting (i.e. being open to) criticism. As I mentioned above, I recently changed my comment policy from one that followed political norms ("free speech") to one that follows more academic norms (if you're not open to being wrong, you're shut down).

No one owns the ideas

Another area where academic norms and political norms differ is in the treatment of the argument from authority "fallacy", with the latter being a lot more amenable to such arguments. You will see the political norm in action in economics when people appeal to what Keynes or Minsky "really said".

However the academic norm favors argument from credibility. The name of the person is unimportant. I probably understand quantum mechanics better than Werner Heisenberg ever did. I've built up some academic credibility in quantum field theory by publishing several papers (going through the peer review process) and a doctoral thesis (going through the thesis defense process). I can credibly talk about the ideas of Feynman, Weinberg, or Witten without invoking their names. I lack "authority", but I have credibility.

The flip side of that is that no one owns the ideas. I don't need Feynman's name to support every path integral and I can take the insights of the approach into completely different subjects that Feynman did not foresee. I'm not limited to what Feynman "really said". People are free to take the information equilibrium framework I've been developing on this blog and write their own papers and blog posts.

You may be thinking I've gone far afield from talking about Charles Murray, but this is terribly relevant. If a university prevented him or the odious Milo Yiannopoulos from speaking, this would not be a violation of academic norms of openness. It would be upholding the academic norm of credibility. The "ideas" [2] are not being suppressed, the speakers are. If the "ideas" are so good, get someone with academic credibility to expound them in your academic environment. In science, you don't need Einstein [3] in order to talk about general relativity. If the relationship between race and intelligence is such an interesting academic research question, surely you don't need famous (but academically discredited) names to talk about it. No one owns the ideas, so someone with academic credibility can talk about them at a university if they really need to be discussed.

In a sense, that gives it away. Having the odious Milo Yiannopoulos there in person was critical to the desires of the young Republicans at Berkeley, and the AEI student group needed Charles Murray himself, not a seminar by one of his acolytes. They needed an argument from authority. It was political, not academic.

That's the difference between political openness and academic openness. Academic openness means you can get someone besides Charles Murray or have him talk at a non-academic venue.

Pushing the ideas of the powerful

The US has supported systemic racism since before it was founded. This is still in place (just read Ta-Nehisi Coates), and therefore it's not like racist ideas don't have a venue or a constituency. Racist ideas are a load of garbage academically, so breaking academic norms in order to push ideology supported by the powerful [4] ... well, rubs a lot of people the wrong way. We also have a president and a party that have broken a lot of norms with the continued support of CEOs and the financial industry. These norms have been broken to push the ideology of the powerful (business interests) or the dominant (white Americans over Syrian refugees). This has resulted in protests. If norms continue to be broken, we can expect those protests to become violent. I'm not advocating violence, but when Republicans spout baldfaced lies and act with rank hypocrisy and there are no consequences to the violations of norms, people will think there are no rules anymore and act accordingly [5].

It is key to understand here that power and money are behind this. As mentioned above, it seems that academic norms have been routinely violated in economics. Business interests pushed "free markets" not because they were the best theory supported by data surviving rigorous peer review (in fact, that process showed many, many free market failures), but because they served the powerful. CEOs set up pseudo-academic institutions in an effort to infect the academy with the free market virus, which spread through "open academic discussion". The global financial crisis sparked a protest ‒ and from what I've read it doesn't discriminate between the good academic economists and the infected ones [6].

If academic and social norms continue to be violated in the interests of the powerful, I fear the protests are only going to get worse.

Footnotes:


[0] Max Planck: A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.


[1] I am not going to cover the well-documented issues with Murray's "research". Suffice to say that the concepts of "race" and "intelligence" are ill-defined on their own, independent of each other, and his work on The Bell Curve was not subject to peer review (or even given to potentially unfriendly reviewers). Additionally, Murray's background is in political science and history, so he has limited expertise to speak about intelligence (the domain of neuroscience, biology, or psychology). Overall, The Bell Curve is pseudo-scientific garbage.

[2] I can't put enough quotation marks around the word "ideas" here so I won't even try.

[3] I hope the universe forgives me for putting these names in the same paragraph.

[4] I can hear the precious snowflakes now: "But conservative and racist ideas are an oppressed minority on college campuses." We don't discuss aether anymore, either. It's because those ideas are garbage and you're just a terrible person.

[5] In the language of information equilibrium, they will discover this new previously inaccessible (because of past norms) state space volume and occupy it.

[6] I think a shift from the virus to the zombie analogy is appropriate here as well as John Quiggin's book.

[7] I am referencing Roberts' call to produce work products, not the main point of his article which Noah Smith deals with very well. Updated 11 March 2017.

Wednesday, March 1, 2017

Ecological fallacy and emergent dynamics

Diane Coyle has a review of a new book on statistics for a general audience. It's Truth or Truthiness: Distinguishing Fact From Fiction By Learning to Think Like a Data Scientist by Howard Wainer. It sounds fun and definitely seems like the kind of book needed in today's data environment.

One of the things Diane writes about in the review is the ecological fallacy:
I also discovered that one aspect of something that’s bugged me since my thesis days – when I started disaggregating macro data – namely the pitfalls of aggregation, has a name elsewhere in the scholarly forest: “The ecological fallacy, in which apparent structure exists in grouped (eg average) data that disappears or even reverses on the individual level.” It seems it’s a commonplace in statistics ... Actually, I think the aggregation issues are more extensive in economics; for example I once heard Dave Giles do a brilliant lecture on how time aggregation can lead to spurious autocorrelation results.
Now I am not 100% sure I read this correctly, so I'm not going to attribute this interpretation to Diane. However, the way this is written could be taken to impugn the macro structure, and that is not the meaning of the ecological fallacy.

The ecological fallacy states that observed macro structures do not imply anything about the micro agents. It does not say that the converse is true, i.e. that a lack of agents behaving consistently with the macro structure implies the macro structure is spurious (it may or may not be).

I think a good example here is diffusion. The macro structure (an entropic force pushing density to become e.g. a uniform distribution) does not imply that individual molecules are seeking out areas of low density. Individual molecules are just moving randomly. A graphic from the Wikipedia article on diffusion illustrates this nicely:


However, the random motion of individual molecules does not make us question the validity of the macro observable diffusion. In a sense, all emergent properties would be suspect if this were true.
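A toy simulation makes the point concrete: start a bunch of random walkers at one end of a box, and even though no walker ever "decides" to seek out low density, the occupation of the box flattens out anyway. Everything below is illustrative; the step size, particle count, and number of steps are arbitrary choices:

    import numpy as np

    # Random walkers start concentrated at the left end of a unit box
    rng = np.random.default_rng(3)
    positions = rng.uniform(0.0, 0.1, size=10_000)

    for _ in range(5_000):
        positions += 0.02 * rng.choice([-1.0, 1.0], size=positions.size)
        np.clip(positions, 0.0, 1.0, out=positions)   # stay inside the box

    counts, _ = np.histogram(positions, bins=10, range=(0.0, 1.0))
    print(counts)   # roughly equal occupation of each bin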

But Diane also said she noticed that macro structures tend to fall apart when disaggregated; this is exactly what we'd expect if macroeconomic forces are entropic forces like diffusion. I've already noted that nominal rigidity (sticky prices and wages) appears to be the result of an entropic force [1] (nominal rigidity appears in aggregate data, but isn't true for individual prices). We can see e.g. Calvo pricing as a "microfoundation" for something that doesn't exist at the micro level ‒ much like (erroneously) creating a density-dependent force for individual molecules in diffusion. I also showed how consumption smoothing, transitive preferences, and rational agents can arise from agents that fail to meet any of those properties.

Essentially, the issues with the ecological fallacy should be ubiquitous in economics if it really is about entropic forces and macro is different from aggregated micro.

...

Footnotes:

[1] Possibly even more interesting are causal entropic forces; this formulation can make inanimate objects appear to do intelligent things. I constructed a demand curve from them here. As I noted in the first link in this footnote, the causality may be deeply related to Duncan Foley and Eric Smith's observation that the real difference between economics and the physics of thermodynamics is the former's focus on irreversible transformations (agents don't willingly undo gains) and the latter's focus on reversible ones (for e.g. experiments).