Saturday, October 10, 2015

The global productivity frontier and dark matter

Alex Tabarrok cites a study that says productivity may not be growing overall, but is growing for firms on the "global productivity frontier". It does seem a bit strange to select a subset consisting of the most productive firms and then say these firms have high productivity growth. While not exactly contradictory (it's a finding that a high productivity "stock" comes with a high productivity "flow"), it does seem like they should go a bit further in figuring out whether this partitioning of the data makes sense.

In any case, the result isn't particularly shocking if we go back to the dark matter problem. Choosing the most productive firms is very much like the selection process behind a stock index: you end up over-representing the firms with large information transfer index values.

Friday, October 9, 2015

A random walk inside the simplex: unemployment and MINIMAC

Random walk in labor supply space. The dimensions would be employment at different firms or in different industries.

As part of my outline of paper #2, I put together a couple of posts that create an interesting result. I previously built a version of MINIMAC (mini macro model) as described by Paul Krugman here as an information equilibrium/maximum entropy model. One consequence of that model is that you can derive a natural rate of unemployment fairly simply.

If we treat the problem as a random walk inside the simplex (bounded by the labor supply, pictured at the top), we get a simple model of spikes in the unemployment rate (shown for a d = 40 dimensional simplex):

Occasional spikes in unemployment, that are caused by randomly moving through the employment state space.

This is to say that you could get spikes in unemployment for no reason whatsoever ... it's just randomly moving around the employment state space. I think I'd still lean towards this model, however.
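Here is a minimal sketch of this kind of simulation: a rejection-sampled random walk inside the solid simplex {xᵢ ≥ 0, Σxᵢ ≤ 1}, with the unemployment rate read off as the fraction of the labor supply left unallocated. The dimension, step size, and step count are illustrative, not the values used for the plot above:

```python
import numpy as np

rng = np.random.default_rng(42)

d = 40          # dimension (firms/industries); illustrative
steps = 5000    # number of walk steps; illustrative
sigma = 0.02    # step size; illustrative

# Start at the centroid of the solid simplex {x_i >= 0, sum x_i <= 1}
x = np.full(d, 1.0 / (d + 1))

unemployment = np.empty(steps)
for t in range(steps):
    # Propose a random step; reject moves that leave the simplex
    proposal = x + sigma * rng.standard_normal(d)
    if proposal.min() >= 0.0 and proposal.sum() <= 1.0:
        x = proposal
    # Unemployment rate = fraction of the labor supply not allocated
    unemployment[t] = 1.0 - x.sum()

print(f"mean unemployment: {unemployment.mean():.3f}")
print(f"max  unemployment: {unemployment.max():.3f}")
```

Tracking `unemployment` over the walk shows long quiet stretches punctuated by occasional excursions away from the full-employment face of the simplex, with no external shock driving them.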

Thursday, October 8, 2015

Draft paper now a preprint on the arXiv

The latest version of the draft paper is now available as a preprint on the arXiv in the section q-fin.EC (Quantitative Finance, category Economics):

The category is described as
Economics, including micro and macro economics, international economics, theory of the firm, labor economics, and other economic topics outside of finance

Utility maximization and entropy maximization

Here's an outline of the second draft paper:
Title: Utility maximization and entropy maximization
Abstract: Utility maximization and entropy maximization represent two different paradigms for finding the equilibrium of N-good, T-period markets. Under certain conditions, they result in the same equilibrium. Maximum entropy and information equilibrium are used to construct some fundamental microeconomic relationships, such as the asset pricing equation and the Euler equation, as well as the minimal macroeconomic model MINIMAC. We discuss the use of entropy maximization as a method to select a unique Arrow-Debreu general equilibrium, and produce a model in which an emergent representative agent exhibits transitive preferences, monotonicity of utility and consumption smoothing despite individual agents having fluctuating consumption and unstable, non-transitive preferences.

1. Utility and information equilibrium

2. Asset pricing equation

3. Euler equation

4. Emergent representative agent

5. MINIMAC, the natural rate of unemployment and the plucking model
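As a toy illustration of the last claim in the abstract (a hypothetical setup, not the paper's actual model): give each agent a budget allocated uniformly at random over the T-period simplex. Individual consumption paths fluctuate wildly, but the average over many agents is smooth — an emergent "representative agent" with consumption smoothing none of the individuals have:

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents = 10_000   # number of agents; illustrative
periods = 50        # T periods; illustrative
budget = 100.0      # each agent's total consumption budget

# Each agent allocates its budget uniformly at random over the T-period
# simplex (a flat Dirichlet draw): individually erratic consumption paths.
paths = budget * rng.dirichlet(np.ones(periods), size=n_agents)

individual_volatility = paths.std(axis=1).mean()  # typical single-agent fluctuation
aggregate_path = paths.mean(axis=0)               # the emergent "representative agent"
aggregate_volatility = aggregate_path.std()

print(f"individual volatility: {individual_volatility:.3f}")
print(f"aggregate  volatility: {aggregate_volatility:.3f}")
```

The aggregate path sits near budget/T in every period while any single agent's path can be anything on the simplex.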

Wednesday, October 7, 2015

Trust is human capital that isn't excludable

Paul Romer thinks this is ridiculous:
Now, here is an alternative micro-foundation for human capital. There is a little homunculus inside each person’s head who knows everything the person knows and who has his own low-powered ham radio station. When two people come into proximity, neither of them can prevent the homunculus in each head from broadcasting over the ham radio to the other homunculus, all the things it knows. So the mere fact of close proximity causes valuable bits of knowledge, such as how to make a right angle using only a measuring rod, to flow from one person’s head to the other person’s head, which then raises the productivity of the other person as a carpenter.
More specifically, this is how he characterizes "the idea that human capital is not fully excludable. In less precise language, it justifies human capital externalities or spillovers."

His preferred view is that it is "crystal clear that human capital is a rival good and that even without any legal protection, human capital is almost perfectly excludable."

This is a common trope in arguing against something you don't think is true: make it seem ridiculous. It's the device behind Galileo's dialogues. But it veers into a straw man argument, because no one is actually arguing for the homunculus.

Let me rewrite Romer's story as a more realistic mechanism:
Now, here is an alternative micro-foundation for human capital. There is a cognitive apparatus in each person’s head that knows everything the person knows and that can communicate via non-verbal cues and signalling. When two people come into proximity, neither of them can prevent hundreds of thousands of years of evolution from broadcasting all these non-verbal signals. So the mere fact of close proximity causes valuable bits of knowledge, such as status, leadership or trustworthiness, to flow from one person’s head to the other person’s head, which then raises the productivity of the other person as part of a group work effort.
I actually proposed a mechanism whereby a talented CEO arriving at a facility can raise productivity simply by engendering trust (lowering the transaction costs involved in the theory of the firm). It also makes sense of why CEOs travel. If the CEO's time were so valuable (she is paid a marginal productivity several hundred times that of other employees), she shouldn't waste it in transit to any location -- people of lower marginal productivity should travel to her.

Romer thinks this idea is ridiculous on the face of it, but he never quite explains why. However, Romer might be the one who wins the Nobel prize on Monday, so use that to calibrate your priors.

Interest rate parity and neo-Fisherism

Scott Sumner mentioned interest rate parity today, which inspired me to see what the information equilibrium model has to say about it. The basic idea is that exchange rates and interest rates between two countries should come to an equilibrium in which the interest rates $r$ over a period $p$ (I used 3-month/90-day rates) and the (expected) exchange rate maintain the no-arbitrage condition

(1 + r_{1}) = \frac{X(t + p)}{X(t)} (1 + r_{2})

Now the exchange rate between two countries in the information equilibrium model, assuming money supply growth rate $\mu$ (over the same period p), is:

X(t) = \alpha \frac{(M_{1} e^{\mu_{1} t/p})^{(\kappa_{1} - 1)}}{(M_{2} e^{\mu_{2} t/p})^{(\kappa_{2} - 1)}}

We can show

\frac{X(t + p)}{X(t)} = \exp \left( \mu_{1} (\kappa_{1} -1 ) - \mu_{2} (\kappa_{2} -1 )\right)

Assuming these rates are small we can do a Taylor series and match up the terms in the no-arbitrage condition so that

r_{i} \approx \mu_{i} (\kappa_{i} -1 )
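A quick numerical sanity check that the matching works (the parameter values here are illustrative, not fit to data):

```python
import math

# Illustrative small growth rates and information transfer indices
mu1, mu2 = 0.010, 0.005   # money supply growth over the period p
k1, k2 = 1.8, 1.6         # information transfer indices kappa

# Exchange rate ratio X(t+p)/X(t) from the model
ratio = math.exp(mu1 * (k1 - 1) - mu2 * (k2 - 1))

# Interest rates from the Taylor-series matching
r1 = mu1 * (k1 - 1)
r2 = mu2 * (k2 - 1)

# No-arbitrage condition (1 + r1) = [X(t+p)/X(t)] (1 + r2),
# which should hold to first order in the rates
lhs = 1 + r1
rhs = ratio * (1 + r2)
print(f"lhs = {lhs:.6f}, rhs = {rhs:.6f}")
```

The two sides agree up to terms of second order in the rates, as expected from truncating the Taylor series.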

This is actually kind of a neo-Fisherite result (see Sumner's post) -- higher growth in the money supply (more expansionary monetary policy) means higher interest rates, and lower growth (less expansionary) means lower rates. The interest rate parity argument also happens to be the one that Sumner says produces the neo-Fisherite result. The version here is the one that makes the most sense to me, but I'm biased. Putting this back in the equation for the exchange rate

\frac{X(t + p)}{X(t)} \approx \exp \left( r_{1} - r_{2} \right)

Is this true? Well, maybe ... if you squint really hard ...

Blue is the RHS of the equation above and yellow is the LHS. But overall it seems exchange rates are way too noisy for this to be useful.

Tuesday, October 6, 2015

Causing you to do X versus giving you the option to do X

Peter left a comment saying that he didn't believe changes in the monetary base caused anything, and I thought I'd promote my response to a post. I said:
... I disagree that the monetary base doesn't cause anything and a different thought experiment shows how it works. Let's say I give you 900 € (and you live in the EU). You'd do one of three things: nothing, deposit some of it in your bank (which gets lent out), or spend part of it. Since we are all complicated individuals with our own motivations I can't say exactly what you'd do. The least informative prior says you'd spend 300 €, put 300 € in the bank and let 300 € sit in your wallet. On average, you'd do something with it. Very few people would just let the 900 € sit under their mattress. 
Did giving you 900 € cause you to deposit the money or spend it? Well, that's philosophical. Personally, I like to say the 900 € opened up your consumption state space and you took your own path through it ... and on average people do wander through part of it.
[I used € because dollar signs will collide with mathjax.]
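The "least informative prior" in the quote can be sketched as a flat Dirichlet distribution over the three options (spend, deposit, hold) -- a toy illustration of the state-space idea, not a claim about actual behavior:

```python
import numpy as np

rng = np.random.default_rng(1)

amount = 900.0  # euros
# Three options: spend, deposit in the bank, leave in the wallet.
# A flat Dirichlet is the least informative prior on the allocation simplex.
allocations = amount * rng.dirichlet(np.ones(3), size=100_000)

mean_alloc = allocations.mean(axis=0)
print("mean (spend, deposit, hold):", np.round(mean_alloc, 1))
# Each option averages amount/3 = 300 euros, even though any single
# draw can land anywhere on the simplex.
```

Any one person's allocation is unpredictable; the ensemble average is the 300/300/300 split.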

The key thing is to understand the state space opened up by the 900 €. Does it include higher prices (inflation)? Increased output (NGDP = N)?

This blog says yes to both if the 900 € is physical currency M and the ratio of the naive (least informative prior) information in revealing a dollar of output (log N) to the information in revealing a dollar of physical currency (log M) is greater than one (this is the information transfer index k). [2] If k is close to one (allocating a dollar of output has the same information as allocating a dollar of currency), then there is output growth but not inflation. Inflation measures widget information [1]; money measures widgets. Output is a mix of both.
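A sketch of how k controls inflation, assuming the information equilibrium condition dN/dM = k N/M used elsewhere on this blog (the numbers below are illustrative):

```python
# The information equilibrium condition dN/dM = k * N / M has the general
# solution N = c * M**k, so the price level P = dN/dM = k * c * M**(k - 1).

def price_level(M, k, c=1.0):
    return k * c * M ** (k - 1)

M0, M1 = 100.0, 200.0   # physical currency doubles; illustrative

# k = 1: revealing a dollar of output carries the same information as
# revealing a dollar of currency -> price level is flat (no inflation).
print(f"k = 1.0: P goes {price_level(M0, 1.0):.2f} -> {price_level(M1, 1.0):.2f}")

# k > 1: a dollar of output carries more information -> P rises with M.
print(f"k = 1.5: P goes {price_level(M0, 1.5):.2f} -> {price_level(M1, 1.5):.2f}")
```

With k = 1 the price level stays put as currency grows (all output growth, no inflation); with k > 1 growing the currency raises the price level.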


[1] There's a bit of nuance to that statement, but it's a good elevator pitch version. See here and here for more.

[2] See the draft paper for more.