QFT: The Platonic ideal of a framework
TL;DR = No.
This pair ([1], [2]) of interesting older Noahpinion (feat. Roger Farmer) pieces showed up on Twitter recently; in [1] Noah Smith opens with:
You often hear defenders of the DSGE [Dynamic Stochastic General Equilibrium] modeling framework say something like this: "You can't attack DSGE. It's just a framework. You can use it to model anything." But this isn't true. If it were true, we wouldn't have the label "DSGE", we'd just call it "modeling".
I've talked about frameworks many times (see e.g. here, here, here, here, here or here), and while I would agree with Noah's general sentiment, adding two words makes the definition of framework acceptable:
That is to say your framework should be general enough to tackle any given problem, but not so general that it is consistent with any set of empirical data. This is the idea here:
How can you figure out what a framework is? Imagine you're given an economic question. Now ask yourself if there is something you immediately write down to start solving it. Is there something? That's your framework.
You can start to try to solve it, but it doesn't mean you'll get an answer consistent with empirical data. Or as I mention here:
One way to understand what a framework is is to ask whether the world could behave in a different way in your framework ... Can you build a market monetarist model in your framework? It doesn't have to be empirically accurate (the IT framework version is terrible), but you should be able to at least formulate it. If the answer is no, then you don't have a framework -- you have a set of priors.
Also, a framework shouldn't tell you what the major phenomenology of the system you are studying is; for example, most of the frameworks here define what a recession is (instead of providing a methodology for studying how recessions work).
To some degree, DSGE passes these tests. Real Business Cycle (RBC) models as well as New Keynesian (NK) models can be expressed as DSGE models. But there is one test that DSGE fails (or at least fails to the best of my knowledge):
Theoretical frameworks organize well-established empirical and theoretical results. Using that framework allows your model to be consistent with all of that prior art.
DSGE basically encapsulates (or really hangs off of) general equilibrium theory. However, does anyone know what empirical success it captures? This is not the same question as whether DSGE has any empirical success. We are asking for accepted empirical results that were used to build the DSGE framework.
DSGE was originally built by Kydland and Prescott as a foundational model for RBC theory. It wasn't built on accepted empirical results successfully explained by theory. NK added sticky prices -- which aren't observed at the micro level.
Basically, DSGE doesn't look like it does one of the things a framework is supposed to do: capture prior art that explains empirical successes. So it's not a framework for macroeconomics.
Jason, what's your favorite dumbed down example of a framework "capturing prior art that explains empirical successes?" And for that example can you point out what is:
1. The framework
2. The "prior art"
3. The "empirical successes" explained by the prior art
4. How this prior art explained it
5. In what sense it is "captured" by the framework
?
I can think of one but I may be wrong since I don't know much about these subjects ... But I just watched a documentary on Boltzmann.
So statistical mechanics = framework.
Thermodynamics = prior art.
Ideal gas law an example of an empirical success.
SM captures it in the sense of explaining what entropy is and why it behaves the way it does.
How's that?
What I mean by "dumbed down" is something more accessible.
I would say things like Boyle's law and other empirical results are the "prior art."
Stat mech organizes those prior results and lets you tackle e.g. black hole thermodynamics (string theory degrees of freedom). Or really any other system with a large number of degrees of freedom.
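As a concrete sketch of how stat mech "captures" a prior empirical success (my own toy illustration, not from the post): for an ideal gas the microstate count scales as W ~ V^N, so the Boltzmann entropy is S = k ln W = N k ln V + const, and the thermodynamic relation p = T (∂S/∂V) recovers the empirical gas law pV = NkT.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def entropy_volume_term(n_particles, volume):
    """Volume-dependent part of the Boltzmann entropy S = k ln(V**N)."""
    return n_particles * K_B * math.log(volume)

def pressure(n_particles, temperature, volume, dv=1e-9):
    """p = T * dS/dV, evaluated with a central finite difference."""
    s_plus = entropy_volume_term(n_particles, volume + dv)
    s_minus = entropy_volume_term(n_particles, volume - dv)
    return temperature * (s_plus - s_minus) / (2 * dv)

# Check against the empirical ideal gas law p = N k T / V:
N, T, V = 1e23, 300.0, 1e-3  # particles, kelvin, cubic meters
p_stat_mech = pressure(N, T, V)
p_ideal_gas = N * K_B * T / V
print(p_stat_mech, p_ideal_gas)  # these agree to numerical precision
```

The point being: the ideal gas law existed as an empirical result long before statistical mechanics, and the framework reproduces it as a derived consequence rather than assuming it.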
But really QFT is just quantum mechanics + special relativity ... Not much more complicated than stat mech.
BTW, is that your notebook?
It was one of my retro-iPads.
https://en.wikipedia.org/wiki/Hipster_PDA
Lol...
BTW, that BBC documentary I mention above wasn't bad: it was on order and disorder as it related to the history of science. They covered complexity arising from an entropy gradient, Boltzmann's explanation of entropy, Shannon's definition of information and Maxwell and his demon.
I wonder if you can calculate a "behavioural factor" based on the probability of a system with O(1e9) agents experiencing a significant spontaneous decrease in entropy vs the frequency that actually takes place in real life? They mentioned in the documentary that Boltzmann, responding to critics, calculated an expected time it would take a half cubic centimeter container of gas to return to its initial condition given you allowed it to expand into a full cubic centimeter at t=0. They didn't say what that time was, but implied it was clear even at the time that it was much longer than the solar system could be expected to last.
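The back-of-envelope behind that anecdote can be sketched (my own numbers and assumptions, not Boltzmann's actual calculation): each molecule is independently in the original half of the container with probability 1/2, so the probability of a spontaneous re-collapse is 2^(-N), and even with absurdly frequent sampling the expected waiting time is astronomically long.

```python
import math

# Rough molecule count in ~1 cm^3 of gas at standard conditions
# (order of the Loschmidt number; an assumption for illustration).
N = 2.7e19

# Probability that all N molecules are simultaneously back in one half:
# P = 2**(-N), far too small to compute directly, so work in log10.
log10_p = -N * math.log10(2)
print(f"P(all molecules in one half) ~ 10^{log10_p:.3g}")

# Even checking the gas every picosecond, the expected wait in years
# still has ~10^18 digits -- far longer than the solar system will last.
checks_per_year = 3.15e7 / 1e-12  # seconds per year / one picosecond
log10_wait_years = -log10_p - math.log10(checks_per_year)
print(f"Expected wait ~ 10^{log10_wait_years:.3g} years")
```

So whatever number Boltzmann actually quoted, the documentary's implication is easy to verify: the exponent alone dwarfs any astrophysical timescale.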
I wonder if you can calculate a "behavioural factor" based on the probability of a system with O(1e9) agents experiencing a significant spontaneous decrease in entropy vs the frequency that actually takes place in real life?
Probably not -- the spontaneous falls seem to be non-ideal information transfer. Calculating it from agents would probably get you near zero probability. I look at it in the paper (via the fluctuation theorem).
What if you expressed it as an equivalent number of truly random agents experiencing proportionate drops in entropy with the same expected frequency? Maybe that number is 10, for example. Then you'd have a behavioural degree of 1e9/10 = 1e8 for collections of real humans...
Well, rather than continuing to make stuff up, I'll just go and reread your paper... I missed that part apparently.
I don't think it could be expressed as a scalar factor. It's more complicated ...
A smaller "effective" agent population would have larger fluctuations ... but upward as well as downward. That isn't what is observed.
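That symmetry is easy to see in a toy simulation (my own sketch, not the paper's model): for n independent random agents, relative fluctuations in an aggregate scale like 1/sqrt(n) and go up as often as down. Shrinking the effective agent count buys bigger fluctuations, but not the one-sided downward falls.

```python
import random
import statistics

def relative_fluctuations(n_agents, n_trials=2000, seed=0):
    """Deviations of the mean "state" of n uniform-random agents
    from its expected value of 0.5, over repeated trials."""
    rng = random.Random(seed)
    devs = []
    for _ in range(n_trials):
        total = sum(rng.random() for _ in range(n_agents))
        devs.append(total / n_agents - 0.5)
    return devs

for n in (10, 1000):
    devs = relative_fluctuations(n)
    sigma = statistics.pstdev(devs)             # shrinks like 1/sqrt(n)
    frac_up = sum(d > 0 for d in devs) / len(devs)  # stays near 0.5
    print(f"n={n}: sigma ~ {sigma:.3f}, fraction upward ~ {frac_up:.2f}")
```

The 10-agent population fluctuates about ten times harder than the 1000-agent one, but in both cases roughly half the fluctuations are upward, which is the objection in the comment above.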