## Saturday, June 10, 2017

### Milton Friedman's question begging

> I know I'm right in saying that Milton Friedman's thermostat is an important idea that all economists ought to be aware of. And I'm pretty sure I'm right in asserting that almost all economists are unaware of this important idea that all economists ought to be aware of.
If Nick is right about economists being unaware of it, then there's a good reason: it's not a logically valid argument.

Nick retells Friedman's parable using a gas pedal g(t) and a car's speed s(t) going up and down hills h(t). The basic premise is that if the driver perfectly achieves a target speed of 100 km/hr, then it will look like the gas pedal movements are unrelated to speed (if we don't know about the hills). The gas will be going up and down, but the speed will be constant.
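The parable can be sketched numerically (a toy simulation with made-up hill and pedal values, not anyone's actual model): when a perfect driver offsets the hills exactly, the pedal moves constantly but the data alone show no relationship between pedal and speed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical hill gradients h(t) (illustrative values only)
h = rng.normal(0.0, 1.0, n)

# A perfect driver moves the gas pedal g(t) to exactly offset the hills...
g = 50.0 + 10.0 * h

# ...so the speed s(t) stays pinned at the 100 km/hr target
s = np.full(n, 100.0)

# With s(t) constant, the sample covariance between pedal and speed
# is exactly zero: the data alone suggest the gas pedal doesn't matter.
cov_gs = np.cov(g, s)[0, 1]
print(cov_gs)  # 0.0
```

Only someone who already knows, from outside this data, that g(t) drives s(t) can read the zero covariance as "perfect control" rather than "no effect" — which is the issue at stake below.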

The argument purportedly can be deployed at anyone using data to claim the gas pedal doesn't matter.

But how did either the prosecution or the defense know the gas pedal had anything to do with this scenario in the first place? Friedman's thermostat assumes some prior, empirically validated model in which the gas pedal was conclusively shown to determine the car's speed before the "constant speed" scenario came to be.

That is to say a scientist just given the "constant speed" scenario would say there's no evidence anything influences speed. The "best model" is actually a constant:

s(t) = 100 km/hr

Not

s = s(h(t), g(t), t)

If you already know g(t) matters ‒ and adaptively makes s(t) constant ‒ then, yes, the constant s(t) data doesn't disprove that. But if it is a question whether g(t) matters, then assuming g(t) matters in order to disprove claims that g(t) doesn't matter is classic question begging.

Additionally, in order to disprove claims that g(t) doesn't matter one could just deploy the original data and findings that g(t) does matter. This of course makes Friedman's thermostat superfluous.

So hopefully that's why most economists don't know about Milton Friedman's question begging, I mean, thermostat.

Now it is true that when Friedman deployed the argument [pdf], he said there was a time when the thermostat was imperfect before the analog of the "constant speed" scenario ‒ which allows one to determine the effects of g(t). However it is not an undisputed fact that g(t) was determined to affect speed (for example, post war inflation is arguably a demographic effect).

At the end of the day, it's usually best to argue about models in terms of empirical data rather than logic and philosophy.

1. "If you already know g(t) matters ‒ and adaptively makes s(t) constant ‒ then, yes, the constant s(t) data doesn't disprove that. But if it is a question whether g(t) matters, then assuming g(t) matters in order to disprove claims that g(t) doesn't matter is classic question begging."

I don't know how much of the whole conversation between Nick and me you read, but I have a few comments related to what you wrote here.

First, as I mentioned during the thread with Nick, the relationship between investment and interest rates (which was the original topic of the discussion) is clearer from the 1950s to the 1970s and then falls apart with the Volcker disinflation. Part of this might be because neither interest rates nor investment as a share of GDP moved around that much during the period, meaning that your point is probably still valid. But to the extent that monetary policy became a lot more "active" in the 1980s, and that made the IS curve disappear in the data, I think Nick's thermostat isn't necessarily question begging in this specific case.

Another thought I mentioned in the thread is that the IS curve shifts around a lot for reasons that we don't completely understand (for some reason recessions happen and the IS curve shifts left without any specific cause, similar to the definition-less "shocks" in your dynamic equilibrium models), so it's hard to actually see the relationship in the data -- interest rates end up positively correlated with demand in the data because central banks choose to cut interest rates when demand is low. So, basically, the identification problem makes the IS curve invisible (but maybe since we know the shape of the LM curve we can estimate the whole model as a system given parameters for the LM curve we found separately...?)

Unfortunately I don't have the technical skills to estimate a system of equations all at once, so for the time being I guess I'm stuck with the "natural experiment" of the 1980s that shows hiking interest rates causing a recession...
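The identification problem described here can be sketched in a toy model (all parameters are invented for illustration): if the central bank leans against IS shocks, the observed interest-rate/output scatter traces out the policy rule rather than the IS curve.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Toy IS curve with hypothetical parameters: y = a - b*r + shock (true slope -2)
a, b = 10.0, 2.0
shock = rng.normal(0.0, 3.0, n)  # large, unexplained demand shocks

# A stylized policy rule: the bank cuts r when demand is weak
r = 2.0 + 0.4 * shock
y = a - b * r + shock

# Regressing y on r recovers the rule-induced relationship, not the IS slope
slope = np.polyfit(r, y, 1)[0]
print(slope)  # ≈ +0.5, nothing like the true IS slope of -2
```

Exogenous policy moves — the Volcker-style "natural experiment" — are exactly what break this collinearity and let the IS slope be estimated.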

1. I did follow the conversation; I was mostly reacting to Nick's post on its own, which implies you can make the argument even in the absence of a history where you already know "g(t) matters".

At the end of my post, I reference the fact that Friedman was actually using the thermostat in this way (i.e. there was a history where you could show "g(t) matters").

I think relative to your conversation, it is being used in that sense, which is why I didn't bring up your conversation. However, Nick's post on its own basically assumes that there is some history where "g(t) matters", which is what I have a problem with (the thermostat meta-argument, not the question of whether monetary policy matters, which I only said was not indisputable).

2. Regarding the "natural experiment" of the late 70s and 80s, that isn't necessarily as cut and dried:

http://informationtransfereconomics.blogspot.com/2016/09/paul-romer-on-volcker-disinflation.html

Actually, the recent expansion of the monetary base is a cleaner natural experiment than the 80s and comes to the opposite conclusion:

http://informationtransfereconomics.blogspot.com/2014/11/quantitative-easing-cleanest-experiment.html

3. Jason,

I'm curious as to your response to the argument that if monetary policy is ultimately ineffective right now, the Fed could theoretically just buy up every asset in the economy without significantly moving the needle in terms of stimulus.

4. Anti,

One thing that would happen in the course of doing that is that transaction decisions about assets would steadily come under the purview of a single decider (the Fed), which would lead to a massive fall in entropy (i.e. a major coordination).

It is difficult to say what happens in that scenario, but one possible analogy is close to a feudal society (with the Fed as the lord) which basically has an impoverished population with close to zero inflation. This is a bit how I envision the "wealth condensation" phase in this model:

https://arxiv.org/abs/cond-mat/0002374

However, I don't think that scenario would come to pass because I do think central banks and monetary policy can control inflation above ~10% (where one basically has the quantity theory of money). If the central bank started giving out currency at an average rate well above 10% more per year, it would start to have an impact:

http://informationtransfereconomics.blogspot.com/2017/03/belarus-and-effective-theories.html

But since we don't talk about policies that will generate 10%+ inflation per year, this represents an unrealistic thought experiment. For all intents and purposes, we can assume > 10% inflation is off the table and therefore we live in a world where monetary policy doesn't matter.

5. Jason, I'm a little bit confused.

When I refer to the "natural experiment of the 80s" I'm saying the Volcker disinflation can be used to estimate the slope of the IS curve, not test whether or not monetary policy matters.

I agree that QE is a natural experiment, but all it does is confirm that the LM curve is flat at the ZLB (increasing the money supply doesn't do anything when interest rates ≈ 0), not that the IS curve doesn't exist. Remember the impetus for this whole thing is whether or not it's possible to find IS-LM in the data.

6. You're right; my mistake.

7. That's an interesting reply. At least it's testable, in principle. If I ever find myself in charge of a monetary authority, I'll run A-B-A-B experiments. Market monetarism may triumph yet.

I reject the very un-American paper you link to out of hand. It's even authored by Frenchies!

8. Anti,

You can't dismiss the paper out of hand. I mean, you can write those words, but they're meaningless because either a) you had credibility and lost it by dismissing a credible paper out of hand or b) you never had credibility in the first place!

9. Jason,

Another possibility is that my absurd dismissal wasn't serious. Hence the use of "Frenchies".

10. Yes, my response was something of a joke as well.

2. "At the end of the day, it's usually best to argue about models in terms of empirical data rather than logic and philosophy."

What do you mean here?

Surely you don't mean we can take empirical data at random and find meaning from it. You are not trying to play "guess where I got this data".

I would argue that empirical data by itself is nothing but a string of numbers. Data is given logical meaning when it can be connected to observable events and described in a code that is commonly understood. (For example, something physical happens that can be described in a language understood by people conversant in that language.)

I agree with your argument that we can use only two of the three sets of available empirical data to reach an illogical conclusion, but, even here, we are claiming "illogical" because it is easy to show (with the help of a third data set logically associated with physical change) that a hasty consideration of insufficient data can lead to erroneous conclusions.

The constant speed of 100 km/hr in hilly terrain provides the perfect example. If we plot the car travel distance versus the horizontal distance traveled (we may need another data set), we would find a much richer relationship to throttle position.

1. "Surely you don't mean we can take empirical data at random and find meaning from it."

Surely I don't mean that because I said:

At the end of the day, it's usually best to argue about *models* in terms of empirical data rather than logic and philosophy.

Emphasis added. That does not in the slightest mean "take empirical data at random".

You also said:

"If we plot the car travel distance versus the horizontal distance traveled (we may need another data set), we would find a much richer relationship to throttle position."

Sure, but in the thought experiment we don't have access to any information besides the gas pedal g(t) and speed s(t). So yes: if you have a different situation, the situation is different.

3. I think that Nick Rowe is correct here and that he is making an excellent point which is an Achilles heel of modern macro. However, I would explain it differently.

When we study a natural phenomenon like gravity, we gain insight by studying the BEHAVIOURS related to the phenomenon. We cannot speak to a great designer of gravity who can explain the fundamental operation of the system. We have no alternative but to infer the rules of gravity from behavioural observations.

This is not true for man-made systems such as cars and the economy. We are the DESIGNERS (and the builders and the operators) of man-made systems. That means that we can also understand the OPERATION of man-made systems, and their individual components, by understanding the design and the controls of the systems. We can then understand the behaviours of these systems within the context of a common understanding of their design, operation and control.

When we observe the behaviour of a man-made system, we are observing a COMBINATION of the system’s natural behaviours and behaviours caused by use of the system’s controls. To make sense of this, we need to disentangle the natural elements and the control elements.

When we observe a car with constant velocity, even though it travels up and down hills, we can use the logic of gravity and the logic of our understanding of the design and operation of a car, to disentangle the natural and control elements.

When we observe the economy, we have the same problem. To what extent is what we are observing the natural behaviour of the economy, and to what extent is it the result of specific, one-off actions by policy-makers or the general population e.g. the UK vote for Brexit?

To assess policy effectiveness ‘scientifically’, we need to disentangle the natural from the one-off. However, it is not clear how we should do this. You seem to assume that the behaviours we are observing are entirely natural. Your assumption is that you can forecast the future by analysing the past. That is clearly wrong e.g. the GBP has fallen by 10-15% since last year’s Brexit vote and, as much as anything is certain in macro, the Brexit vote was a major cause of that fall.

This is not a minor problem of interest only to academics. It impacts all policy-making and all economic shocks.

The problem is not unique to macro either. For example, in the 1990s the IT industry predicted that computer systems would fail at the millennium due to a date formatting problem. As a result, businesses and governments spent billions changing their computer systems. When we reached the millennium … nothing happened. There followed a succession of op-ed articles claiming that the Year 2000 problem had been a con. This was not true but is a good example of the problems associated with following a course of action which merely maintains the status quo, when that status quo is threatened. Climate change is another example.

These problems arise because people cannot disentangle the natural behaviour of a system from the change in that behaviour brought about by the implementation of a policy or a mitigating action. In economics, there are further complications with disentangling the impact of multiple concurrent actions e.g. the Bank of England reduced interest rates immediately after the Brexit vote, so how much change was caused by the vote and how much by the bank’s action?

That is why an approach to macro which relies entirely on analysing high level statistics will not work, no matter how good the mathematics. That is why we need to use logic and other techniques alongside empirical analysis to imagine a useful version of macroeconomics. As I’ve pointed out before, that’s why real policy-making uses other techniques such as controlled trials and detailed operational analyses to try to isolate specific causes and effects.

1. I think you assume a separation between "man-made" and "natural" systems that isn't supported by evidence. Noah Smith has a good rundown of both mathematics predicting human behavior and how science isn't entirely about mathematical laws:

http://noahpinionblog.blogspot.com/2017/06/is-economics-science.html

My more emotional response to assertions that mathematical laws will never predict human behavior is: "Well, not with that attitude!"

I am not aware of any cases where perfunctory pessimism led to any breakthroughs in understanding.

It's true that one shouldn't default to using the optimists to guide policy either, but given that the primary conclusion from using the mathematical optimism of the information equilibrium framework is policy pessimism (i.e. our efforts are vainglorious), I'm not sure I see what the problem is here.

The main conclusion I've seen from the models on this blog that work is that human decisions don't matter much except in the very short run. In that case, the models give the "natural" and "man-made" components in your separation weights of 1 and 0, respectively.

2. Jason: “the models give the "natural" and "man-made" components in your separation weights of 1 and 0, respectively”

I’m sorry that my last comment upset you. However, we have some fundamental disagreements here. Let’s go right back to your basic assumption: that the economy behaves like a gas as it is made up of millions of atomic agents whose behaviour is effectively random. I like that idea as far as it goes, but we need to be clear about its limitations.

The first is that the government is responsible (in the UK at least) for about 40% of GDP. It is also responsible for setting the laws which govern the behaviours of the rest of the system as well as trying to iron out deviations from economic trends and social moral ideals. It is also responsible for potentially destabilising actions such as redefining economic and political treaties with other countries and starting wars. That’s one hell of a big atom. It’s more akin to a sun which is orbited by millions of rock fragments – except the behaviour of this sun is currently under the control of Donald Trump and Theresa May.

Even above that level is the basic man-made design of our societies e.g. capitalism versus communism; democracy versus dictatorship; different democratic systems; sharing arrangements between nations e.g. Greece is trapped in its current economic malaise due to the man-made design of the Euro.

A further problem with gas behaviour is when the millions of small atoms behave in a way to impact the control and behaviour of the system e.g. an election or referendum result, or a social revolution, or a mass panic.

A further problem with gas behaviour is that, every so often, events come along which change the trend of the economy. For example, the industrial revolution occurred when it did because we harnessed steam power etc. The demographic trend you often discuss, relating to women entering the job market around the 1970s, happened because of inventions such as the fridge and the washing machine, and effective contraception. Globalisation was enabled by man-made designs in areas such as transportation and supply-chain management.

Similarly, effective contraception, combined with effective medical treatment in other areas, has led to an ageing population with the result that, in the coming years, a smaller working population will have to support a larger retired population.

Finally, all this economic activity has resulted in major pollution and atmospheric changes which have caused climate change.

None of these things is random. They are all man-made. There is nothing natural about capitalism or communism; putting Donald Trump in charge of the world’s largest economy; the UK voting for Brexit; the invention of the steam engine; or man-made climate change.

All these things have discernible impacts on the state of the economy. When we measure the economy, our measures include all these impacts. The fact that you can see no man-made impact on the economy means that you have no explanation for e.g. why the GBP fell by 10-15% last year – even though everyone in the UK knows that it happened because of the Brexit vote and everyone thinks that Brexit is the biggest economic issue facing the country.

The fundamental point at issue here is the nature of economics. You see it as a quest for natural rules, with deviations from trend that can be ignored, whereas I see it as a study of the causes and effects of the many deviations from trend, including one-off effects. This has nothing to do with perfunctory pessimism. I could say the same about you as you have no interest in the things I think are important e.g. Brexit and the Greek problems in the Euro area; or the consequences of different people conceiving the economy from different perspectives (only one of which is your perspective); or how real economic institutions operate and how real economic policy is made.

3. I wasn't upset by what you said; I just found your larger thesis genuinely baffling from a logical standpoint. The only way one could know that mathematical theories cannot explain macroeconomics is if one a) had tried every possible theory or b) found some sort of mathematical proof that it was impossible. Unfortunately the latter involves at least some mathematical basis to start from (I have in mind Bell's theorem in quantum mechanics or 'no go' theorems in quantum field theory).

Another way to put it is that failure to imagine how it could be done is not evidence it can't be done.

In addition, you have mischaracterized my approach here. I do not assume the economy can be understood with mathematical laws: I assume I don't know whether or not it can.

Additionally, one of the primary tenets of the info eq approach is that humans are *different* from atoms. If humans were like atoms, info eq would reduce to thermodynamics and be somewhat boring. Info eq is a new kind of generalized thermodynamics where atoms could e.g. panic and huddle at one corner of the room.

When they *aren't* panicking, info eq says macro is at best describable in statistical terms. The day to day fluctuations are largely meaningless.

However, when the atoms do panic, info eq sets some boundaries for how the system can behave (those boundaries happen to be defined by the non-panicking states).

I've said on many occasions that those scenarios are probably sociology and psychology, not economics.

That's because I'm not modeling humans as atoms. I'm modeling them as objects with such complex behavior that they appear algorithmically random at the (macro) scale at which we measure them. I fully accept this might not work out as an approach.

Regarding government, it is true that government actions can cause humans to panic (or otherwise enter into some groupthink state). The primary usefulness of the info eq approach may be to identify those episodes and determine the relative size of the effect.

One example I have is unemployment, and when I get back to a computer instead of typing on my phone I will describe it in more detail.

4. Jason: “The only way one could know if mathematical theories cannot explain macroeconomics is if one a) had tried every possible theory or b) found some sort of mathematical proof that it was impossible”

I have said before that I am happy to accept any evidence regarding the use of mathematics. You are the maths specialist. My argument is not about maths. It’s about economics – and what is the best way to answer various economic questions.

Your approach is to use historical data to make unconditional forecasts about the future i.e.

Future = Function (Past).

However, I am saying that specific events like the Brexit vote (a policy / control event) or the industrial revolution (an evolutionary / random event) have macro effects which are not contained in the historical data. It is the event itself which triggers the change i.e.

Future = Function (Past, Major Evolutionary / Random Events, Major Control / Policy Events).

We might then have different forecasts conditional on if, and when, the major events take place. That is the underlying macro-economic problem. As we observe only one of the alternate timelines, we need some way of separating out the impact of the ‘natural’ part of the model i.e. Future = Function (Past) from the ‘man-made’ aspects i.e. Future = Function (Major Evolutionary Events, Major Control Events). Also, as there may be several major events taking place in parallel, we may need to separate out the impacts of each event. For example, in 2016, the UK voted to leave the EU AND the Bank of England responded by reducing interest rates. Also, some of the events may be unique events e.g. no-one has left the EU before, so we can’t use a historical template. Also, we need to estimate the impact of these events BEFORE we make any control decision, so that, as far as is practical, we can make an informed decision. Also, the impact of a major decision may vary depending on circumstances e.g. whether we are at the ZLB.

The problem is not mathematics. It is more like a failure to agree on the questions that the mathematics is trying to answer. I think that what I am saying is in the spirit of Nick Rowe's original post. Anyway, I think we have a major disagreement here, as our perspectives are very different, so it's probably best just to leave it there.

1. It's nice that you put it in this way because it is helpful in showing how you have mischaracterized the information equilibrium framework and made several assumptions that are not true in general.

First, let me say that the claim that you can't use historical data to make predictions about a future that includes events not present in that historical data is not true in general. As an explicit example, we have this machine learning algorithm making predictions about future frames of a video, where the future frames do not exist in the past frames.

Jumping off from your abstract functions, we'll say your second function above is (with future = t+1, past = t):

economy(t+1) = economy(t, many variables, events, etc)

The IT framework then gives you something like:

|IT(economy(t, ...)) − economy(t+1)| < ε

The IT framework is agnostic about the specifics of the complex system underlying the economy and so de facto includes events like the effects of Brexit or other policy choices.

Normal economics pretends to know how humans behave (the microstates) and projects their behavior to the future to find the values of economic variables (the macrostate). That is basically your first abstract equation above.

The IT model is agnostic about the microstates; it includes *every possible* microstate.

Another way to think about it is that UK(hard Brexit) is just as much a function of microstates (people, money, and firms) as UK(soft Brexit) ... that is to say

UK = ∑ₙ cₙ fₙ(x, y, z ...)

and the "soft Brexit" state is just a different combination (different values of cₙ) of population, money, firms, etc than the "hard Brexit" state.

The IT framework asks questions about what can be answered considering all possible cₙ. In information equilibrium, the economy seems to be more about the "opportunity set" (all possible cₙ) than the specific opportunities selected by every agent.

In terms of the machine learning example above, the information equilibrium framework is about the things that are true if you consider the space of all possible pixel values of the future images.

2. We’re talking at cross purposes here so here is another attempt.

The most significant social value of macro-economics is to assist with policy making e.g. on Brexit. Policy makers have an objective of making Brexit a success and want to make informed decisions to support that objective. They ask questions about the probable impacts of different macro-level decisions e.g. about single market membership.

Any proposed forecasting solution must be able to answer such questions to be of value to policy-makers. Useful techniques must distinguish between the impact of each option e.g. what would be the economic impact of staying in the single market versus leaving it? That's what Nick Rowe is saying in more general terms. It's the difference between conditional forecasting (where there is a separate forecast isolating each policy option) and unconditional forecasting (where there is just a single forecast irrespective of any policy decision).

In short, any technique that attempts to answer a Brexit policy choice question needs to know about Brexit, the single market and the available policy options before it can carry out any impact assessment. That’s what I mean by:

Future = Function (Major Control / Policy Events)

Your technique does not know about Brexit. It uses data about the past to make an unconditional forecast. That’s what I mean by:

Future = Function (Past)

Apart from that, I have made no assumptions about ITE. In my Brexit example, I am trying to ask whether ITE would add any value to our policy-makers. The onus is on you to explain if you think it could add value. You would have to show how a forecasting technique that doesn’t know about Brexit or its policy options could help policy-makers make informed decisions about Brexit. Maybe that’s possible but you would need to provide some evidence.

You seem to think that I am criticising maths or saying that forecasting is not possible. That is not correct. My questions on policy options are not related to the technicalities of any forecasting solution.

Jason: “you have mischaracterized the information equilibrium framework”

Other than saying that your forecasting approach is based on using generic historical data rather than specific policy-related data, I haven’t made any assumptions about ITE here.

Jason: “we have this machine learning algorithm making …”

I am not disputing that mathematical forecasts can be useful. I am asking whether your models are useful for real economic policy-making given that you don’t appear to acknowledge policy options, or any sort of policy decision-making, in your models.

Jason: “the economy seems to be more about the "opportunity set” than the specific opportunities selected by every agent”

No-one is arguing that the choices of every agent are important. Brexit and the other examples I quoted in my earlier comment are macro-scale events / decisions.

You seem to think that a technique which passes your criteria for science, but is not useful for policy purposes, is preferable to a technique which is useful for policy but may not pass your criteria for science. This is a significant problem as all economic decisions are made in the knowledge that the future is uncertain.

Keynes’ approach to economics was mostly about practical problem solving and being “roughly right”. He thought that it would be better if economists were like dentists or doctors i.e. specialists at solving a narrow range of difficult real-world problems. Paul Krugman also uses very simple techniques to make useful policy-relevant insights on difficult topics. He also makes his points in a way that can be understood by at least some policy-makers and the public.

Of course, much in economics is neither scientific nor useful. However, that is a separate question.

3. I think a short way to put it is that:

Future = Function(Past)

characterizes a 'theory-free' econometric model (like a vector autoregression) that is a function of lagged values with estimated coefficients. I am not doing that.

In contrast, there is no time variable in the information equilibrium model. It is a relationship between observables. You can use data from the past (as well as forecasts) to validate the relationship but at its heart, the IT model is

Y(t) = f(X(t))

where t is a dummy variable, so we just have Y = f(X).
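A minimal sketch of this point (illustrative numbers, not a fitted model): an information equilibrium condition dY/dX = k·Y/X has the general solution Y = c·X^k, and the fitted relationship between the observables is unchanged if the time ordering is shuffled.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(200)

# Hypothetical observable X(t); the wiggles stand in for measurement noise
X = 100.0 * np.exp(0.02 * t) * (1 + 0.01 * rng.normal(size=t.size))

# Information equilibrium relationship Y = c * X**k
# (general solution of dY/dX = k*Y/X; c and k are illustrative values)
c, k = 2.0, 1.5
Y = c * X**k

# Fit log Y = log c + k log X on the original time ordering...
k_hat = np.polyfit(np.log(X), np.log(Y), 1)[0]

# ...and again with the time index shuffled: same (X, Y) pairs,
# same relationship, so the estimate is unchanged
perm = rng.permutation(t.size)
k_hat_shuffled = np.polyfit(np.log(X[perm]), np.log(Y[perm]), 1)[0]

print(k_hat, k_hat_shuffled)  # both ≈ 1.5
```

The time index only labels the observations; the relationship being validated is between Y and X themselves.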

"You seem to think that a technique which passes your criteria for science, but is not useful for policy purposes, is preferable to a technique which is useful for policy but may not pass your criteria for science."

How can you demonstrate something is useful for policy if it doesn't pass the criteria for science?

If it doesn't pass the criteria for science, then it is policy for some fantasy world that may or may not have any relationship to the real world. This is the problem with a lot of economics (mainstream and heterodox). It makes up weird little thought experiments (like Friedman's thermostat/Nick Rowe's speedometer) and then tries to use them for policy without comparison to data.

Really, just replace the word "science" with "connection to reality". That's all science is in the end -- a method to connect ideas with reality.

Let me rewrite your sentence with that replacement:

You seem to think that a technique which passes criteria for connection to reality, but is not useful for policy purposes, is preferable to a technique which is useful for policy but may not pass criteria for connection to reality.

To which I would answer, emphatically: Yes, it is definitely preferable! Policy-relevant economic theory that doesn't connect to reality is how we got free market ideology, capitalism run amok, and reduction of government services!

4. Jason: “How can you demonstrate something is useful for policy if it doesn't pass the criteria for science?”

I have just seen your previous reply. We are still talking at cross purposes. The quote above is wrong for all sorts of reasons. Very briefly, here are a few.

In as much as policy-making resembles any science, it resembles medicine. The core methods are things such as the creation of conceptual models to understand major causes and effects; understanding best practice in the policy area e.g. in other countries; controlled / low-volume trials to assess the emerging understanding of cause and effect; consultation to get the views of the people impacted by the policy; close monitoring and adjustment of each implementation; learning from experience; understanding and managing risks. Essentially, policy-making is an experiment.

However, policy-making is mostly about decisions and decisions are not science. Decisions involve weighing up facts i.e. deciding which facts are important and which are not. The weightings are always subjective and always reflect the bias of the decision-makers. A weighting of zero means that a fact is ignored. This is true no matter what the decision and no matter who is the decision maker. For example, an economist might argue that a certain policy would increase GDP. However, an environmentalist might point out that the policy would have an adverse impact on the environment. The policy-maker must weigh up the relative importance of the potential benefit to GDP against the risk to the environment. That is always a subjective judgement particularly when some aspects are difficult to quantify.

Another aspect of policy-making is decision-making governance i.e. who makes the decisions and what are the rules around how the decision is made? For example, the US constitution seems to make it difficult to make decisions without a broad consensus. This requires compromise which doesn’t fit with any scientific view of a “correct answer” even if such a thing existed. That’s why consensus builders are often the best policy-makers, while academics who see everything in fundamentalist black and white terms, or who don’t think through the practical implications of their ideas, are mostly hopeless.

In as much as mathematical modelling is used, it must be able to assess different policy options. You have still not answered how your method would do this.

5. Jason: “Policy relevant economic theory that didn't connect to reality is how we got free market ideology, capitalism run amok, and reduction of government services”

Free market ideology in economics starts with supply & demand curves and clearing markets. These do not exist in the real world in the way they are presented in economics. The whole debate about mainstream versus heterodox is that mainstream economics is built on assumptions that free market ideology is correct. That’s why mainstream economics is so hopeless at discussing why pathologies occur. In addition, as I have said before (quoting Joseph Stiglitz), mainstream modelling is based on representative agent models which assume that aspects such as diversity, inequality and private-sector debt do not exist. We don’t need your mathematical models to understand that free market ideology and mainstream economics are bullshit.

Finally, free market ideology varies a lot between countries so blaming any candidate science is unfair. The US (and to a lesser extent the UK) are right-wing countries. You have effectively only two political parties – one would be regarded as centrist in European terms, the other extremely right-wing. For example, the US is 70 years behind the UK in providing universal healthcare. That has nothing to do with any science. Culture, geography and history are far more important. The UK is not exempt from this either. We have a different attitude to the EU, in comparison with our continental neighbours, for reasons that are primarily geographical and historical. We have been debating our relationship with the rest of Europe since the end of WWII without any sign of reaching a consensus.