Friday, June 12, 2015

Euler's theorem, rival inputs and distinguishable particles

Update: Romer responded to Waldmann while I was still writing this post; I have added comments below in brackets.

Overall, Paul Romer seems to think Euler's theorem is like the second law of thermodynamics. Robert Waldmann put it more succinctly than I ever could [and Romer agrees that the analogy wasn't complete; see the response for more]:
I find this odd, because Euler's theorem is math while the second law of thermodynamics is a scientific hypothesis
A minor quibble: I would have said successful scientific hypothesis. The second law follows from assumptions similar to the ones this blog makes about economics, by the way -- except that humans can make coordinated decisions (panic, herd, etc.), which causes economic entropy to fall much more than the fluctuation theorem would allow. (That is to say, the second law of thermodynamics actually is violated in small systems, but the violations of the second law of "econo-dynamics" are too big to be the result of something like the fluctuation theorem.)

The real thing that Romer says Andolfatto violates is the idea that if you double all rival inputs, output should double. [Romer points this out again in his response.] Romer points to Vollrath's very good explanation of the basic logic. It seems completely reasonable, but it is wrong in general.

Here's an example. Say the only industry on Earth is searching sequenced DNA for markers. Everyone does this with X computers and produces Y output (locations of DNA markers). The computers are rival, there is perfect competition, and the algorithms ("ideas") are all the same. Typically a good parallel algorithm obeys Euler's theorem: double the number of computers and you double output. However, there are actually two points where adding computers yields a massive super-linear speed-up: when the size of the problem drops below the total RAM available across all of the computers, and again when it fits into the combined processor cache (the L4 cache, I believe). The number of disk accesses (or RAM accesses) falls as you approach each of these memory thresholds, and during the transition from many accesses to zero the speed of your calculation -- and therefore your output -- rises faster than linearly. For some X, 2X computers produce (2 + δ)Y output (because they finish faster), and δ can be huge: δ ~ 10 or 100! A computer architecture with a memory hierarchy like this can be super-linear across several orders of magnitude. Yes, if you allow for α ≫ 1, X → α X does result in Y → α Y, but such a large α is not necessarily empirically relevant.
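This super-linear regime is easy to see in a toy model. Everything below is an illustrative assumption (problem size, RAM per machine, disk penalty), and `output` is a hypothetical throughput function, not a benchmark of any real cluster:

```python
# Toy model of the DNA-search example. All numbers are made-up assumptions.

PROBLEM_SIZE_GB = 1000     # assumed total working set of sequenced DNA
RAM_PER_COMPUTER_GB = 16   # assumed
DISK_PENALTY = 50          # assumed slowdown factor for out-of-core work

def output(n_computers):
    """Marker-search throughput with n_computers machines."""
    in_ram = min(n_computers * RAM_PER_COMPUTER_GB, PROBLEM_SIZE_GB)
    fraction_on_disk = 1 - in_ram / PROBLEM_SIZE_GB
    # Per-machine speed degrades with the fraction of the problem still
    # hitting disk; once everything fits in RAM, speed = 1.
    speed = 1 / (1 + DISK_PENALTY * fraction_on_disk)
    return n_computers * speed

X = 40  # 40 machines: 640 GB of RAM < 1000 GB, so partly out of core
print(output(2 * X) / output(X))  # well above 2: doubling X more than doubles Y
```

At 2X = 80 machines the whole problem fits in RAM, so doubling the rival input produces far more than double the output.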

The issue here is that Euler's theorem assumes the thing you're doubling is non-interacting. The computers in the example above interact: their memory structures combine to open up a path to greater efficiency that isn't available at smaller X.

Actually, you can get an interesting violation of "constant returns to scale" in physics. It's called the Gibbs paradox, and it refers to entropy failing to be extensive (the physics term for constant returns to scale) under a naive entropy formula. Take two boxes: their combined entropy is twice the entropy of one box (call it 2S):


Putting these boxes together raises the entropy by a small amount to (2+δ)S according to the naive formula (that I use again in the graph at the bottom of this post).


Separating them reduces the entropy back to 2S -- in violation of the second law!


The resolution of this paradox for the physical system is that the formula fails to take into account that the particles are indistinguishable -- you can't tell whether a Helium atom that started in the left box ends up in the left or right box when you separate them again. Making that correction makes the entropy extensive (i.e. restores constant returns to scale).
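The naive and corrected formulas can be compared directly. A minimal sketch, with constants dropped from the entropy and N and V chosen arbitrarily:

```python
import math

def S_naive(N, V):
    # Naive entropy (constants dropped): treats particles as distinguishable.
    return N * math.log(V)

def S_corrected(N, V):
    # Divide the state count by N! (Stirling: log N! ~ N log N - N);
    # the N log N piece turns N log V into N log(V/N).
    return N * math.log(V / N)

N, V = 1e6, 1.0
# Two separate boxes vs. one combined box (2N particles in volume 2V):
print(S_naive(2 * N, 2 * V) - 2 * S_naive(N, V))          # 2N log 2 > 0: the "paradox"
print(S_corrected(2 * N, 2 * V) - 2 * S_corrected(N, V))  # 0: extensive
```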

However! In economics, with the exception of money (and similar abstract assets), we don't have indistinguishable particles. Amy is different from Bill, so when their companies merge and then split up (like the pictures above) you can tell who works for which company. In physics you can in principle recover this information about the atoms, but it costs energy to do so and ends up raising the temperature of the system to identify every atom. In economics, this information is pretty cheap to get.

Additionally, the formulas of thermodynamics are all evaluated at large values of the number of particles (N). Really large, like 10²³. At large but more economically relevant scales (10⁶), we still haven't reached that limit ... in fact, if you look at the entropy function log N! ≈ N log N − N, you get "increasing returns to scale", with the sum of the production function exponents being > 1:


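The exponent in the graph above can be computed directly from Stirling's approximation. A minimal sketch (the N values are illustrative):

```python
import math

def scaling_exponent(N):
    """Local exponent d(log S)/d(log N) for S(N) = log N! ~ N log N - N.

    With S ≈ N ln N - N (Stirling), dS/dN = ln N, so the elasticity is
    N ln N / (N ln N - N) = ln N / (ln N - 1) -- always > 1, but it
    approaches 1 as N -> infinity (constant returns only in the limit).
    """
    return math.log(N) / (math.log(N) - 1)

for N in (1e6, 1e12, 1e23):
    print(N, scaling_exponent(N))
# The exponent exceeds 1 at economically relevant N ~ 10^6 and only
# approaches 1 at thermodynamic scales N ~ 10^23.
```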
What if that is all secular stagnation is? An approach to the limit where economics is extensive, there are constant returns to scale for rival inputs, and all economic growth ... halts. Maybe growth economists are studying the eventual future of economics ...

Anyway, in conclusion, the things that can allow increasing returns to scale are interacting rival inputs, distinguishable rival inputs and finite rival inputs. If you think of rival inputs (capital) as an infinite frictionless fluid of indistinguishable molecules then, yes, Romer is basically correct. However, competitive equilibrium with price taking makes only one of these assumptions (infiniteness) -- in particular, it does not make the indistinguishability assumption. That's an important one if entropy is related to output. Amy and Bill could be identically productive, the same in every way. However, just allowing them to have names means that the entropy (and hence output) of Amy, Bill, Catherine and Dave (who are also the same as Amy and Bill) can be more than just twice the output of Amy and Bill.

18 comments:

  1. So surprising (to me), to have come to this conclusion by entropy....one of the most common assumptions in formal economics is anonymity, I have a minor quibble therefore when you say that competitive equilibrium doesn't require it, I think in most models it either is required or is tacitly baked in by making indexed firms otherwise indistinguishable...when we teach about monopolistic competition we emphasize branding as a form of breaking competitive equilibrium...agents in new monetarist search models may be unique but acting on information other than their type is assumed costly by making the odds of interacting with them again arbitrarily small...but yeah I'm with you anonymity is often a dumb assumption ....although I couldn't tell you the names of any of the traders who traded today on any of the stock exchanges

    Replies
    1. Hi LAL,

      Anonymity and an index of firms that are "the same" (Fi = Fj in some way) are not quite the same as in-principle indistinguishability. I should have been more explicit in the post above that the idea has a specific meaning in physics.

      As I mentioned above, money is the closest to being an indistinguishable particle in economics ... but dollar bills have serial numbers. It is impossible in principle to put a serial number on a Helium atom. Any macroscopic object -- or even things represented by macroscopic objects like the states of electrons in a bank of (nanometer-scale) transistors in a computer representing a dollar in your bank account -- is distinguishable.

      Another analogy is that if economic agents were indistinguishable, you couldn't tell if you gave me money or I gave you money ... and it wouldn't matter!

      The key point is even though it may be costly to figure out the identity of every stock trader, it is not an impossible task given the laws of physics. The result is that an entropy term that looks like:

      S ~ N log V

      has a correction factor where you divide the number of states by the number of ways they could be exchanged (the over-counted states don't exist), i.e. N!, so the entropy picks up a term ~ − log N! ≈ − N log N and S becomes:

      S ~ N log V - N log N = N log (V/N)

      This makes entropy extensive (V → α V, N → α N gives S → α S) ... i.e. it obeys Euler's theorem.
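That extensivity statement can be checked numerically; a sketch with arbitrary illustrative values of α, N and V:

```python
import math

def S_naive(N, V):
    return N * math.log(V)        # no N! correction: not extensive

def S_corrected(N, V):
    return N * math.log(V / N)    # with the ~ -N log N correction term

N, V, alpha = 1e4, 2.0, 3.0
print(S_corrected(alpha * N, alpha * V) / S_corrected(N, V))  # ~ alpha: extensive
print(S_naive(alpha * N, alpha * V) / S_naive(N, V))          # not alpha
```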

    2. I see your point, but I promise the real world never gets in the way of economic modelling...there is no way for instance to know who traded with whom in the new monetarist search models...every agent is assigned a real number but the event of two people meeting is not even a measurable event...

    3. I shouldn't even say assigned a real number...they simply exist as a continuum...

    4. That's pretty interesting -- such a system is basically a field with no ultraviolet cutoff (minimum size of fluctuations in the field) and the distribution of random fluctuations would have an ultraviolet catastrophe just like in the days before Planck with black body radiation. That is to say it is inconsistent with random noise traders trading any money (because they would trade an infinite amount of money).

    5. Lol, im usually pretty good at following along your physics examples....this one has me stumped though...I will have to Wikipedia...

    6. This comment has been removed by the author.

      In the days before quantum physics, calculations of the energy given off by a hot object (e.g. the red glow of a hot heating element) came out infinite. That was because if you uniformly distribute random energy across an infinite number of light wavelengths (λ), each of which can carry any amount of energy, most of it ends up "ultraviolet" -- actually gamma rays -- and the red-hot iron would be irradiating you. Obviously this was wrong. Planck applied the solution that light only comes in quanta of energy E = h c/λ and got the result that things glow red, then yellow, then white and finally blue.

      I always used that as an example of everyday experience with quantum mechanics back when I taught classes.

      So my intuition is that if you have random fluctuations in a continuum of traders (random fluctuations of the "trader field" as opposed to the electromagnetic field) and randomly distribute value across those trades with a uniform distribution, most of them would be high value trades and you'd end up with an infinite amount of money.

      Now in the NK models, equilibrium probably comes from some sort of optimization instead of a "maximum entropy" argument so they don't really suffer from this ultraviolet catastrophe -- there aren't any random traders, just rational agents.
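The catastrophe and Planck's fix can be seen numerically. A sketch comparing the classical Rayleigh-Jeans spectral density with Planck's (the temperature and comparison wavelengths below are illustrative assumptions):

```python
import math

# Spectral energy density per unit wavelength: classical Rayleigh-Jeans
# vs. Planck. T is an assumed, roughly "red-hot" temperature.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI units
T = 1500.0

def rayleigh_jeans(lam):
    # Classical equipartition over modes: diverges as lam -> 0.
    return 8 * math.pi * k * T / lam**4

def planck(lam):
    # Light quantized in units E = h c / lam: finite everywhere.
    x = h * c / (lam * k * T)
    return (8 * math.pi * h * c / lam**5) / math.expm1(x)

for lam in (1e-5, 1e-6, 1e-7):  # infrared down toward the ultraviolet
    print(lam, rayleigh_jeans(lam), planck(lam))
# Rayleigh-Jeans keeps growing at short wavelengths (the ultraviolet
# catastrophe); the Planck density instead falls off exponentially.
```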

    8. does it matter how the fluctuations are random? ...in the new monetarist models I'm talking about, the agents are usually choosing to accept/not accept money and to produce/not produce some good each at the flip of coins with probabilities p1 and p2...

    9. If they are accepting a unit of money and/or producing a unit of goods, then the value is already "quantized" -- in the picture in my comment above, agents could produce as much or as little or buy as much or as little as they wanted.

  2. This comment has been removed by the author.

  3. If I recall correctly, the evolution of these models ran into some trouble precisely in making the optimal cash holdings determinate....then they developed the rather strange trick of alternating between these search like markets with centralized markets....but that transition happened before my time...

  4. Thanks. You addressed a question I had not yet formulated about an analogy with Maxwell's Demons in economics. :)

    But I don't quite get the question of naming and indistinguishability in regard to fermions. Aren't fermions like identical twins? Not completely indistinguishable -- else we would have Bose-Einstein statistics? And therefore nameable? Then in the case of uniting and then separating the boxes, we don't know which twin is which (and, even though they have names, they aren't telling ;)).

    Replies
    1. Hi Bill,

      Yes, fermions are identical particles ... there are two fundamental kinds of indistinguishable particles (fermions and bosons), which follow Fermi-Dirac and Bose-Einstein statistics, respectively.

      You can also have indistinguishable particles without specific statistics (or at least where spin-statistics isn't important to the system at the scale you're looking at ... typically molecules at normal temperatures and pressures). That is essentially the picture when you fix the Gibbs paradox above.

      What I am talking about in the non-extensive "economic" entropy functions are distinguishable particles -- particles that could be named if you set your mind to it. Entropy in physics doesn't usually address this (except possibly in mesoscale physics) because it's not relevant to most physical systems.

    2. Well, I think that the principle of the identity of indistinguishables produces Bose-Einstein statistics. E. g., you can't tell the difference between HHHT, HHTH, HTHH, and THHH, because you can't distinguish the H's from each other.

      Anyway, it's a fairly philosophical question. :)

    3. Hi Bill,

      Spin-statistics is not the same as whether something is distinguishable. Your example could be fermions or bosons ... I'll do two states because it is easier:

      Symmetric boson state with indistinguishable states HT and TH:
      |TH> = |HT> = (|H>|T> + |T>|H>)/√2

      Anti-symmetric fermion state with indistinguishable states HT and TH:
      −|TH> = |HT> = (|H>|T> − |T>|H>)/√2

      note that the observable expectation value 〈HT|O|HT〉 = 〈TH|O|TH〉 is the same and the minus sign cancels.

    4. HTML fail ... last line should be:

      note that the observable state is〈HT|O|HT〉=〈TH|O|TH〉and the minus sign cancels
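A quick numerical check that the overall minus sign drops out of expectation values (a sketch; O below is just an arbitrary random symmetric matrix standing in for an observable):

```python
import random

# Two-particle basis states as length-4 amplitude vectors via the
# tensor (Kronecker) product.
def kron(a, b):
    return [x * y for x in a for y in b]

H, T = [1.0, 0.0], [0.0, 1.0]
r2 = 2 ** 0.5
sym  = [(x + y) / r2 for x, y in zip(kron(H, T), kron(T, H))]  # boson-like
anti = [(x - y) / r2 for x, y in zip(kron(H, T), kron(T, H))]  # fermion-like

random.seed(0)
A = [[random.gauss(0, 1) for _ in range(4)] for _ in range(4)]
O = [[(A[i][j] + A[j][i]) / 2 for j in range(4)] for i in range(4)]  # Hermitian

def expect(psi, O):
    """<psi|O|psi> for real amplitudes."""
    return sum(psi[i] * O[i][j] * psi[j] for i in range(4) for j in range(4))

for psi in (sym, anti):
    flipped = [-x for x in psi]   # overall minus sign, as on -|TH>
    print(expect(psi, O) == expect(flipped, O))  # True: the sign cancels
```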

    5. Many thanks, Jason! :)


Comments are welcome. Please see the Moderation and comment policy.

Also, try to avoid using dollar signs, as they interfere with my MathJax setup. I left it set up that way because I think it's funny for an economics blog. You can use € or £ instead.

Note: Only a member of this blog may post a comment.