Assume a can opener ... [Wikimedia Commons]
I was reminded of Paul Krugman's piece on evolution in an especially good comment here (I wrote about this subject here):
Caution, however. [David Sloan Wilson] himself is considered heterodox within evolutionary biology, being a vocal and triumphalist advocate of group selection and multi-level selection. Mainstream evolutionary biology is still largely individual-selectionist or gene-level selectionist (their equivalent of methodological individualism). So we have a case of a heterodox biologist egging economics on to heterodoxy.
I'm not going to re-hash the evolution argument (which, as with any methodology, is always a question of "what is it good for?"). I do think discussion of Krugman's definition of economics is worthy of a post, though. Here's Krugman:
Let me give you my own personal definition of the basic method of economic theory. To me, it seems that what we know as economics is the study of those phenomena that can be understood as emerging from the interactions among intelligent, self-interested individuals. Notice that there are really four parts to this definition. Let's read from right to left.
- Economics is about what individuals do: not classes, not "correlations of forces", but individual actors. This is not to deny the relevance of higher levels of analysis, but they must be grounded in individual behavior. Methodological individualism is of the essence.
- The individuals are self-interested. There is nothing in economics that inherently prevents us from allowing people to derive satisfaction from others' consumption, but the predictive power of economic theory comes from the presumption that normally people care about themselves.
- The individuals are intelligent: obvious opportunities for gain are not neglected. Hundred-dollar bills do not lie unattended in the street for very long.
- We are concerned with the interaction of such individuals: Most interesting economic theory, from supply and demand on, is about "invisible hand" processes in which the collective outcome is not what individuals intended.
Emphasis in the original. The problem is that Krugman essentially assumes the form of the solution to the problem. Talk about assuming a can opener.
- We do not know for a fact that there is little dimensional reduction in macroeconomic theory. We also do not know for a fact that all macro statistical regularities directly relate to micro agent parameters -- that presumes there are no macroscopic "entropic forces" that arise from properties of the distributions of agents. Essentially, this kind of approach, if applied in thermodynamics, would incorrectly describe the stickiness of glue or osmosis ... because it assumes away entropic forces from the start. I think there is a lot of dimensional reduction and micro agent parameters tend to have little importance (these are closely related claims) -- but I don't know for sure.
- This oddly assumes both a) that aggregated self-interested behavior has macro consequences, and b) that aggregated deviations from self-interested behavior have no macro consequences. It's especially odd since we don't know the answer to either question. No one has successfully aggregated agents into a model that empirically describes a macroeconomy in a way where the details of the agents matter.
- Phase space added to an ideal gas is quickly occupied. Is the gas intelligent? A model where humans randomly wander (dither) through an economic state space predicts a certain occupation of economic states (grabbing those 100-dollar bills). Maybe real humans occupy those states faster than dither would predict. That would constitute an empirical test of the role of intelligence. As yet, we have no answer ... so why assume one?
- Supply and demand can arise from properties of the opportunity set (economic state space) alone with random ("irrational") agents (a toy simulation in this spirit follows this list). The invisible hand seems amenable to treatment as an entropic force. Why assume it comes from the detailed parameters in the interaction of agents?
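Here's a minimal sketch of that last point, assuming only that each agent picks a random point on a budget constraint -- the budget level, prices, and number of agents below are arbitrary illustrative values, not estimates from any actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_demand(price, budget=100.0, n_agents=10_000):
    """Each agent spends a uniformly random share of its budget on the good
    (the rest on a numeraire good); return the average quantity demanded."""
    share = rng.random(n_agents)           # random point on the budget line
    return np.mean(share * budget / price)

for price in [1, 2, 4, 8]:
    print(f"price = {price}: average demand ~ {mean_demand(price):.1f}")

# Average demand falls as price rises: a downward-sloping demand curve from
# purely random ("irrational") choices constrained only by the opportunity set.
```

The average demand works out to (budget/2)/price, so it slopes downward without any agent optimizing anything -- the shape comes from the opportunity set, not from agent parameters.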
So you can see -- Krugman's definition of economics presumes a particular form for the answer. My opinion is that Lee Smolin's definition is better, and doesn't presume as much:
[Statistical] economics is the study of the collective behavior of large numbers of economic agents.
I'd like to give an even better definition:
Economics is the study of simplifications in the collective behavior of a large number of agents.
The invisible hand is just such a simplification. I think interest rates, inflation and output might be similar simplifications -- leading to real predictive power in economic theory.
And I'd add: The study of the complexities in the collective behavior of a large number of agents is sociology.
...
Update 1/21/2016
This result is hard to square with Krugman's assumption of intelligence:
We found that many of the so-called conditional cooperators are confused and do not seem to understand the public-goods game
"The invisible hand seems amenable to treatment as an entropic force. Why assume it comes from the detailed parameters in the interaction of agents?"
Adam Smith mentions the Invisible Hand three times, I have read. The one time I know about derives not only from the self-interest of capitalists, but also from the difficulty they faced in investing abroad. That is why their self-interest benefited their home economy. :)
I think it is a good analogy even if its use today is not exactly what Adam Smith said it was. The idea just screams entropic force. An atom ends up on the other side of a room not because of any desire to go explore, but simply because of the maximum entropy state of uniform density ... an entropic force of diffusion ushering it to parts unknown ...
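A toy sketch of that picture (just an illustration -- the number of atoms, room size, and step count are made up): start the atoms bunched up at one end of a discretized room, let each one take an unbiased random walk, and the density ends up uniform without any atom "wanting" to go anywhere:

```python
import numpy as np

rng = np.random.default_rng(0)
n_atoms, n_sites, n_steps = 5_000, 50, 10_000

# Start every "atom" in the leftmost tenth of the room.
positions = rng.integers(0, n_sites // 10, size=n_atoms)

for _ in range(n_steps):
    steps = rng.choice([-1, 1], size=n_atoms)                # unbiased random walk
    positions = np.clip(positions + steps, 0, n_sites - 1)   # reflecting walls

counts, _ = np.histogram(positions, bins=5, range=(0, n_sites))
print(counts)  # roughly equal counts in each fifth of the room: uniform density
```

The uniform density isn't a goal of any individual walker; it's just the overwhelmingly most likely macrostate ... the entropic force.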
Oh, I think that entropy is operating in Adam Smith's example. One is more likely to do what is easy to do than what is difficult to do. We do not have to ask so much about incentives or reasons. :)
Good post, Jason. You tied a bunch of your favorite themes together as usual; I like how you tie in your old posts that I'd forgotten about.
O/T: this article from 2012 by physicist David Deutsch was linked to by Brad DeLong today, and it ties in tangentially to your post here and the one on pessimism from a few days back (where you mentioned free will). I found it to be on the longish side for what I got out of it. I suspect he's correct that artificial general intelligence (AGI) is possible (he implies inevitable... because of the "universality of computation"). However, he seems to think that everything we've done to get there is barking up the wrong tree (except perhaps the creation of computing hardware). He says nature failed too (except with us), so apes and chimps (which he dismisses as "behaviouristic") aren't even close (however, he leaves open the possibility that a breakthrough could happen). He thinks we need some kind of revolution in philosophy... something Popperian (he's a fan of Popper) but I guess by implication even better (since apparently Popper isn't good enough, or we'd already have this breakthrough). "Creativity" is what our algorithms are missing, he says.
He makes some good points and I agree with much of it, but I think this paragraph exemplifies what leaves me a bit skeptical of his main point (i.e. the as-yet missing "creativity" in anything that's not a human brain, which we'll figure out through philosophy):
"An AGI is qualitatively, not quantitatively, different from all other computer programs. The Skynet misconception [that self awareness is the key to AGI] likewise informs the hope that AGI is merely an emergent property of complexity, or that increased computer power will bring it forth (as if someone had already written an AGI program but it takes a year to utter each sentence). It is behind the notion that the unique abilities of the brain are due to its ‘massive parallelism’ or to its neuronal architecture, two ideas that violate computational universality. Expecting to create an AGI without first understanding in detail how it works is like expecting skyscrapers to learn to fly if we build them tall enough."
I don't know... I don't necessarily hope that AGI is an emergent property of complexity, but he didn't convince me to rule that out. (I'm thinking of your PC's "phase transition" here, Lol).
Also he writes off 'evolutionary algorithms' as the wrong way to achieve AGI, and maybe he's correct. However, if we ever get to a time when our machines design, manufacture and repair themselves, with little or no human supervision, I think it's fair to say that AGI or not, we're toast: we've passed the torch at that point, and evolution takes over.
Deutsch presents the universality of computation as if it were a proven property of our universe.
As far as we know, we need the real numbers to describe quantum mechanics, and there exist real numbers that are not (finitely) computable; you can't just leave out the non-computable ones. Quantum computers do not help -- what would be needed are hypercomputers.
Maybe the real theory of the universe does involve only computable numbers. But we don't know what that is, and universality critically depends on it.
I think there is some conflation here with the view that the universe is "computing" when physical processes happen (which implies a circular pseudo-existence proof ... we assume the universe computes the result of a physical process by the physical process happening, so there exists at least one computer that can do the calculation -- the universe -- but that is an assumption ... classic question begging).
That is to say universality is a conjecture about the universe.
Also this brings up something that I talked about on Twitter in reference to something Noah Smith put up about George Ellis ...
https://twitter.com/infotranecon/status/689332372051853316
George Ellis made at least one point I agree with:
As I stated above, mathematical equations only represent part of reality, and should not be confused with reality. A specific related issue: there is a group of people out there writing papers based on the idea that physics is a computational process. But a physical law is not an algorithm. So who chooses the computational strategy and the algorithms that realise a specific physical law? (Finite elements perhaps?) What language is it written in? (Does Nature use Java or C++? What machine code is used?) Where is the CPU? What is used for memory, and in what way are read and write commands executed? Additionally if it’s a computation, how does Nature avoid the halting problem? It’s all a very bad analogy that does not work.
Something I didn't even question about the piece. Thanks for the review!
Delete"...how does Nature avoid the halting problem?"
The blue screen of death? My brother (an RF engineer) is convinced the world will end via software failure.
This makes me think of agrarian societies that thought the world would end with plagues of locusts ...
I like vacuum state transitions myself, which could easily be construed as software failure ... no one could test either hypothesis.
This is the way the world ends,
This is the way the world ends,
Not with a bang but with a reboot.
;)
Jason, are you familiar with the work of Farjoun and Machover?
Seems there might be some synergies there
pe
Interesting -- I will check it out.