Diane Coyle has taken her crusade against GDP (as a measure) to the NYTimes OpEd page. I don't have much of an issue with it in general; however, she claims:
Growth forecasts for gross domestic product in the United States at the end of this year vary from about 1.75 percent to 3 percent — a good measure of the lack of consensus.
Does this reflect a lack of consensus? I did some simple linear extrapolations in the IT model and looked at the error in the estimate of 2015 Q4 RGDP (i.e., propagating the error in the predictions of PCE inflation and NGDP through to RGDP) to get a handle on how a 125 basis point spread stacks up. Here are the extrapolations:
That's 1-σ (one standard deviation), so we'd expect the outcome to fall outside that range more than 30% of the time. What we end up with is an estimate between 0.7% and 4.9% at roughly 70% confidence: a 420 basis point error band.
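To make the propagation step concrete, here is a minimal sketch of the kind of calculation involved, assuming RGDP growth ≈ NGDP growth − PCE inflation with independent errors. The individual σ values and the central estimate are illustrative placeholders chosen to reproduce the band above, not the IT model's actual outputs.

```python
from math import erf, sqrt

# Illustrative 1-sigma extrapolation errors in percentage points;
# placeholders, not the IT model's actual fit residuals. The values
# are chosen so the combined error comes out near 2.1 points.
sigma_ngdp = 2.0   # error in the NGDP growth extrapolation
sigma_pce = 0.64   # error in the PCE inflation extrapolation

# RGDP growth ~ NGDP growth - PCE inflation, so for independent
# errors the variances add:
sigma_rgdp = sqrt(sigma_ngdp**2 + sigma_pce**2)

central = 2.8  # illustrative central RGDP growth estimate (%)
low, high = central - sigma_rgdp, central + sigma_rgdp
print(f"1-sigma band: {low:.1f}% to {high:.1f}% "
      f"({(high - low) * 100:.0f} basis points)")
# -> 1-sigma band: 0.7% to 4.9% (420 basis points)

# For a normal distribution, the outcome falls outside +/- 1 sigma
# about 32% of the time:
print(f"P(outside 1 sigma) = {1 - erf(1 / sqrt(2)):.2f}")
```

Note that the NGDP error dominates the combined error, which is why the inflation variance correction mentioned in the comments below barely matters.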
Even casual observation of the raw data tells us the 125 basis point spread (black rectangle on the second plot) is remarkably tight. My 420 basis point band would be more indicative of a lack of consensus!
The truth is that precision to less than a percentage point may not even be possible. It seems Diane Coyle is setting up economic forecasts to fail.
Should I be impressed that in the bottom RGDP plot the gray data line never seems to exit the orange zone defined by +/- 1 sigma bands from the model?
Pretty good, huh?
Actually, I think I used the monthly variance for inflation when I should have used the quarterly one. However, it was dominated by the NGDP error anyway.
People like Coyle won't be satisfied until we have 100% of GDP imputed based on models. Then we can have the nirvana of forecasting GDP with precision using DSGE models, because the data will have been "imputed" using those models. Right now, 16% of GDP is imputed, which is a record high. Imputed data is basically made-up stuff based on questionable modeling, and oftentimes the result is something that defies common intuition.
I think that Coyle's sense about the variability of the predictions is in no small part a result of the lack of empiricism in economics. Of all the online graphs of economic projections that I have seen, the only ones aside from those on this blog that have error bands are those from the Bank of England. Economists seem not to pay much attention to errors. (Surely econometricians do, but where are their graphs?)
I agree with Diane Coyle here. I think the disagreement is a cultural one. In science / academia, facts (or pseudo-facts like forecasts) are an end in themselves. In business / government, facts are tools for use in decision making. There are some consequences of this.
First, the decision making value of a fact can be thought of as the difference in the quality of the decision between having the fact to hand and not having it. If there were no GDP forecasts, we would assume that tomorrow’s GDP would be the same as today’s GDP give or take a little. That’s true most of the time but not all of the time. If mathematical models can also only suggest that tomorrow’s GDP will be the same as today’s GDP give or take a little, there is no added value to the decision from having the model. Added value would arise if the mathematical forecasts could highlight exception conditions e.g. an impending recession, but that may not be possible.
Note that this is not a criticism of mathematical models. Mathematical models are what they are. Nobody is expecting them to be more precise than possible.
Second, all complex decision making involves “weighing up” a number of competing facts. Facts often point decisions in different directions, so the only way of making a decision is to assign each fact a weighting indicating its relative importance in the decision making process. We define a number of options for the decision and assess each option using a formula like:
Option assessment score = (fact 1 * weighting 1) + (fact 2 * weighting 2) + (fact 3 * weighting 3) + …
Even though facts are (mostly) objective, the weightings are ALWAYS subjective. Really smart people in business and government know this and know that they can game any decision by influencing the weightings attached to each fact. If a fact has a weighting of zero, it has no influence on a decision.
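To illustrate how subjective weightings can flip a decision, here is a small sketch of the scoring formula above; the options, facts, and weighting schemes are invented for the example.

```python
def assessment_score(facts, weightings):
    """Option assessment score = sum of (fact * weighting) over all facts."""
    return sum(facts[name] * weightings[name] for name in facts)

# Two hypothetical options, each scored on the same facts (0-10 scale).
option_a = {"gdp_impact": 7, "fairness": 3, "cost_saving": 4}
option_b = {"gdp_impact": 4, "fairness": 8, "cost_saving": 6}

# Two subjective weighting schemes over the SAME facts.
growth_first = {"gdp_impact": 0.7, "fairness": 0.1, "cost_saving": 0.2}
fairness_first = {"gdp_impact": 0.2, "fairness": 0.6, "cost_saving": 0.2}

for name, weights in [("growth-first", growth_first),
                      ("fairness-first", fairness_first)]:
    a = assessment_score(option_a, weights)
    b = assessment_score(option_b, weights)
    winner = "A" if a > b else "B"
    print(f"{name}: A = {a:.1f}, B = {b:.1f} -> option {winner}")
# growth-first:   A = 6.0, B = 4.8 -> option A
# fairness-first: A = 4.0, B = 6.8 -> option B
```

Identical facts, different weightings, opposite decisions: exactly the gaming-by-weighting described above.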
This is why we get groupthink. A group of like-minded people, e.g. religious zealots or mainstream economists, attach similar weightings to facts. As a result, they draw similar conclusions and make similar decisions. They then decide that these decisions are objective. They’re never objective, irrespective of the decision makers or the decision. When Paul Krugman says that “facts have a liberal bias”, he should really say that “facts to which liberal economists in the USA attach a high weighting have a liberal bias”. Physicists understand this, which is why they rarely get involved in public policy decisions on, say, energy policy. Unfortunately, economists do not understand this. They do not recognise their own bias and they grossly underestimate the difficulty of changing the biases of political decision makers and the general public.
Most people are sceptical about mathematical forecasts of the economy, so they will give such forecasts a low weighting in their decision making process, particularly if the models are based on data that is highly summarised, inaccurate, out of date and constantly subject to revision. This is even more true when models ignore facts that people think are important, e.g. fairness, and when they make unrealistic assumptions, e.g. all-knowing representative agents.
Diane Coyle is asking important questions about which facts we should use in economic decision making and what weightings we should attach to different facts.