Wednesday, January 25, 2017

Explaining the better wisdom of crowds

Exploring the state space.

Mark Thoma points us to MIT News about improving the wisdom of crowds:
Their method ... uses a technique the researchers call the “surprisingly popular” algorithm to better extract correct answers from large groups of people. ... 
The new method is simple. For a given question, people are asked two things: What they think the right answer is, and what they think popular opinion will be. The variation between the two aggregate responses indicates the correct answer. ... 
“The argument in this paper, in a very rough sense, is that people who expect to be in the minority deserve some extra attention,” ...
The interesting thing here is that this method essentially creates a measure of the degree of exploration of the "answer state space" (in economics: the opportunity set) and indicates possible strong "correlations". These are exactly the kinds of things you need to know in order to determine whether e.g. prediction markets are working.

Agents fully exploring the state space is critical to ideal information transfer in prediction markets. Correlations (agents cluster in one answer state) are one way to reduce information entropy; lack of exploration (no agents select particular states) is another.
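As a minimal numerical sketch of this point (the distributions below are invented for illustration, not taken from any market data): the Shannon entropy of the distribution of agents over answer states is maximal when agents spread over the whole state space, and drops both when agents cluster in one state and when some states are never selected.

```python
from math import log2

def entropy(dist):
    """Shannon entropy (bits) of a distribution of agents over answer states."""
    return -sum(p * log2(p) for p in dist if p > 0)

# Four answer states, agents spread uniformly: maximum entropy
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0

# "Correlation": agents cluster in one answer state
print(entropy([0.85, 0.05, 0.05, 0.05]) < 2.0)  # True

# Lack of exploration: two states are never selected
print(entropy([0.5, 0.5, 0.0, 0.0]))  # 1.0
```

Either mechanism shows up as the same symptom: information entropy below the maximum possible for the state space.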

In the example in the article, they ask the question "Is Philadelphia the capital of Pennsylvania?"

In a sense, the question itself sets up a correlation by anchoring Philadelphia in respondents' minds. Those who answer "no" indicate that they have explored the state space (the set of all cities in Pennsylvania) -- they've considered other cities that might be the answer. Those who say others will answer "yes" give a measure of either a lack of exploration or a correlation. The people who expect to be in the minority deserve extra attention because they have more likely explored the state space.
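For a binary question like this, the "surprisingly popular" rule reduces to comparing the actual frequency of each answer with its predicted frequency; a minimal sketch (the respondent numbers are made up, and the paper's actual procedure is more elaborate):

```python
def surprisingly_popular(answers, predicted_yes_fracs):
    """Binary 'surprisingly popular' rule: the answer whose actual
    frequency exceeds its mean predicted frequency wins.

    answers:             each respondent's own answer ("yes"/"no")
    predicted_yes_fracs: each respondent's forecast of the fraction
                         of people who will answer "yes"
    """
    actual_yes = answers.count("yes") / len(answers)
    predicted_yes = sum(predicted_yes_fracs) / len(predicted_yes_fracs)
    return "yes" if actual_yes > predicted_yes else "no"

# "Is Philadelphia the capital of Pennsylvania?" (correct answer: no)
# A majority answers yes, but nearly everyone -- including those who
# know the answer is Harrisburg -- expects "yes" to be popular.
answers = ["yes"] * 6 + ["no"] * 4
predicted = [0.9, 0.8, 0.9, 0.85, 0.9, 0.8, 0.9, 0.85, 0.9, 0.8]
print(surprisingly_popular(answers, predicted))  # no
```

Because "no" is more common than respondents predicted, it is "surprisingly popular" and wins despite being the minority answer.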

Update 3 February 2017

Noah Smith has a Bloomberg View article about this result. What is interesting is that both of his theories can be unified under a lack of state space exploration. Dunning-Kruger [pdf] is, at its heart, not knowing that there exist parts of the state space where you are wrong. You think most people will answer Philadelphia because you haven't considered that other people might know that Harrisburg is the capital of Pennsylvania. As a non-expert, you probably haven't explored the same state space experts have. Noah's second theory is herd behavior; this is exactly the correlation I mention above.


Here's some more reading and background on information equilibrium, prediction markets, and state space exploration (what Jaynes called dither) at the links below. A slide package is here (the first few slides about Becker's "irrational" agents are relevant).


PS I always get worried when information equilibrium "explains everything". It is true that this is an indicator that the theory is correct. However, it is also an indicator of an overactive left brain interpreter.


  1. We have discussed the wisdom of crowds before. I think it is a fascinating topic and one which feels closely related to your theories. However, there are two basic aspects where my views seem to differ from yours, but where my views are still consistent with your theories (at least as far as I understand them).

    First, where is the wisdom of crowds applicable?

I can’t see any value in thinking about simple factual questions in terms of the wisdom of crowds, e.g. what is the capital of Pennsylvania? We can find the answers to these questions by looking them up on Wikipedia and ignoring the crowd. That raises the question of where the wisdom of crowds is useful. I’d suggest three types of question:

    Estimates: where there may or may not be a correct answer but where that answer is not known to anyone in the crowd. For example, how many glass beads are contained in a jar; what is the weight of a prize bull at a market? This type of question seems to have been what prompted the original concept of crowd wisdom.

    Multi-faceted questions / decisions: where the answer involves weighing up many facts. The crowd’s role is mostly to provide a consensus set of weightings of known facts. Example questions include: which is the best mobile phone on the market; what should be the price of Apple shares; who would make the best president?

    Forecasts: where there are no facts because the future has not happened yet. For example, who will win a sports event or a political election; what will be the level of a stock market index at the end of the year; when will there next be a major earthquake in San Francisco?

    What these questions have in common is that they result in a distribution of possible answers, and that the distribution arises from complex thought processes which are varied and can appear almost random to outsiders. That seems consistent with your view of the economy. However, it’s the distribution of the answers that is important – not the correctness, and the crowd is not divided into one set of people who know the answers and another set who do not, as implied by the MIT experiment.


  2. Second, do wise crowds provide correct answers to these questions?

    I’d say that crowds produce, at best, consensus or reasonable answers but not correct answers. Anyone (like me) who plays prediction markets knows that the market favourite often loses. Yet, people who don’t understand these markets (like economists) frequently talk about the consensus as though it represents some sort of truth. This is a semi-religious belief which is contradicted by any basic empirical observation of these markets. The probability of one candidate winning a presidential election, or one team winning a sports event, represents the consensus of market participants. Nothing more. The whole point of prediction markets (and stock markets) is to bet AGAINST the prevailing consensus when you think that the consensus is wrong. The difficulty of beating the consensus view is not based on the consensus being correct, and betting would be pointless if either the consensus view, or the view of any one participant, was always correct.

    Jason: “These are exactly the kinds of things you need to know in order to determine whether e.g. prediction markets are working”.

    What do you mean when you talk about determining whether prediction markets are “working”?

    From memory, the probability of Clinton winning last year’s presidential election was around 80% shortly before the election. Let’s assume 84%. That leaves Trump on 16%. That means that Trump had approximately a one in six chance of winning. That’s the same chance as rolling a one on a single roll of a dice. Yet, you and many economists seem to view the presidential prediction markets as having been wrong in their forecasts and having failed. No-one would say that a dice had failed because it came up with a one on a single roll.

    Any prediction probability less than one on the eventual winner, or more than zero on the eventual losers, will be, to some extent, wrong, so all prediction markets are wrong unless a predicted outcome has a 0% or 100% likelihood – in which cases there is no need for a prediction market.
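The die analogy can be checked numerically; a sketch with an invented trial count, simulating many repeated "elections" in which one outcome has a 16% probability:

```python
import random

random.seed(0)

# A 16% forecast is like betting a die won't come up 1: over many
# repetitions, the 16% outcome occurs roughly one time in six.
trials = 100_000
hits = sum(1 for _ in range(trials) if random.random() < 0.16)
print(hits / trials)  # close to 0.16
```

A single occurrence of the 16% outcome is consistent with the forecast; only the long-run frequency over many such events can test it.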

As a non-physicist, my view also seems consistent with my limited understanding of the thought experiment in physics about a cat in a box being alive or dead. When the box is closed, the best estimate is a probability. However, when the box is opened, the cat will be either 100% alive or 100% dead. The probability is based on running a theoretical experiment many times. The actual outcome represents the result of just one instance of the experiment. A second instance of the experiment might result in a different outcome. An instance of a dead cat would not be evidence of a lack of exploration of the possible outcomes by physicists. Instead, what would suggest a lack of exploration of outcomes is generalising a theory from the single dead cat.

    1. Hi Jamie,

      Let me answer one question first; you ask:

      What do you mean when you talk about determining whether prediction markets are “working”?

      When I say this, I don't mean a prediction market is broken if a prediction with a 20% probability came true. I mean it in the very specific sense of this post: Is the market intelligent? (also linked above).

      Some background: the original reason behind developing the information equilibrium framework was an attempt to understand and build metrics for prediction markets being tested as part of the ACE program (DAGGRE/SciCast).

      The idea was that we wanted to find the "real" probability distribution A by matching its information entropy with the information entropy of bets B in the prediction market ... the main equation of information equilibrium: I(A) = I(B).

This led to the results in the first couple blog posts here and to the post linked in this comment.

When I say "working", I mean "not failing" in the sense that agents are fully exploring the state space. The result in the post above gives us some metrics to see if agents are fully exploring the state space.

      This doesn't mean agents are right or wrong on average; it just means that the market is "working" to aggregate that knowledge, whatever it is.

    2. Jason: “I don't mean a prediction market is broken if a prediction with a 20% probability came true”

      After your reply, I searched back for a post you wrote on prediction market performance in the 2016 US election.

      In that post, you suggest that the election prediction markets failed because the probability of a Trump victory was low. That’s inconsistent with what you said in this thread.

      We would have to define a set of measurable criteria for market success / failure to judge a claim of election market failure. I don’t think that it is possible to define such criteria. Any single market is like a single instance of a dead cat in the physics thought experiment. You can’t generalise from a single instance. If you disagree, what measurable criteria did you use to determine the failure of the prediction markets for the election?

      Jason: “Even a 10% probability has to come up sometimes”

      Events with a 10% probability tend to occur 10% of the time. I am surprised by the word ‘even’. This is not a minor point. One of the major complaints about mainstream economics is that it consistently ignores lower probability events.

      Jason: “How do we interpret results like this (from Predictwise)?”

      There are two stages to the Predictwise chart for the election. First, the market gradually moved toward Clinton over the course of several months. Second, the market reversed course abruptly on the evening of the election as the results were declared. Let’s take them in reverse order.

      The election evening reversal is entirely logical. There are only two possible end states for this market: 100% Clinton or 100% Trump. Market participants realised that the markets were wrong when the actual results were declared, so markets moved rapidly from one extreme to the other. There is nothing unusual to discuss in this movement.

      The more questionable behaviour is the longer-term movement towards Clinton. You like to use abstract phrases like “lack of exploration of the state space” but what specifically would that mean in this context? What specific information sources did the markets ignore? Why did YOU not bet against the market’s view using these sources if you perceived a bias in market prices? You could have made money and corrected the market ‘failure’ at the same time!

    3. Hi Jamie,

      What I was referring to were the large swings in market prices, not the magnitudes of individual "probabilities".

      I used to watch business news a lot as a kid, and one of the regularities I noticed early on was that when a company announced an offer to buy another at say €10 per share, the price would immediately rise to e.g. €9.89.

      The question I am asking is effectively the same: what did the buying and selling of shares have to do with this result?

      Actually, Noah Smith's take on Friedman's pool player analogy is appropriate here.

A separate issue I have has to do with DAGGRE mentioned above. Some of the intelligence questions asked were things like "Will North Korea bomb South Korea in the next year?" If the contract price was 0.03 (which was the price I recall) and then jumps to 0.98 *after* North Korea bombs South Korea, of what use is this "prediction market"? (As a side note, I won a lot of the fake-money "bets" simply by applying prospect theory: people seriously overweight small probabilities, so I'd short all of the "few %" prices.)
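The prospect-theory strategy amounts to an expected-value calculation; a sketch with invented numbers (the 3-cent price matches the contract above, the 1% "true" probability is a hypothetical):

```python
def short_ev(price, true_prob):
    """Expected profit per contract from shorting a contract (paying 1
    if the event occurs) at `price`, when the event's true probability
    is `true_prob`: you collect `price` and pay 1 with prob `true_prob`."""
    return price - true_prob

# If bettors overweight small probabilities (prospect theory), a 1%
# event may trade at 3 cents; shorting it has positive expected value.
print(short_ev(0.03, 0.01))  # ~0.02 expected profit per contract
```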

      Regarding your last question, the markets seemed to follow the polls. That's only a small piece of "the state space". I imagine that the procedure in the paper discussed above would have asked everyone: what answer do you think the popular answer is? And everyone would have said Clinton. The result would have been to give much lower weight to them.

      I personally did not have access to any information besides the polls or the prediction markets (living in Seattle, it was going to be a landslide for Clinton so I couldn't look to my neighbors or people on the street). I did know that the Republicans I do have contact with were going to fall in line, so I probably could have made money betting the probability would be much closer to 50/50 (per usual US elections of the past couple decades). But generally prediction markets are illegal in the US (they're considered gambling).

  3. I’ve been mulling this over and I think we are talking past each other. Perhaps I’m not being clear in what I am trying to say.

    Jason: “If the contract price was 0.03 (which was price I recall) and then jumps to 0.98 *after* North Korea bombs South Korea of what use is this ‘prediction market’”?

    In the UK, we use the term “betting markets” for these markets. “Prediction markets” is an American term. I use that term in these discussions only because it is the term that you use.

    Markets are mechanisms which help people trade. When I go to the supermarket, the objective is for the supermarket to sell me food. The objective is not to set the “correct” price of food so that economists, newspaper columnists and bloggers can talk about it.

    Similarly, the purpose of prediction markets is to allow participants to make money from each other by placing bets on future events. The success of these markets should be judged by their ability to take bets, keep the money safe, identify and expose insider trading, and pay out to the legitimate winners based on the market rules. Prediction markets do not make predictions! They merely show a consensus view of market participants – like any other market.

    Jason: “I imagine that the procedure in the paper discussed above would have asked everyone: what answer do you think the popular answer is? And everyone would have said Clinton. The result would have been to give much lower weight to them”

    As I said before, I didn’t find the example in the paper persuasive. The correct answer to their question can be found in Wikipedia so the opinion of a crowd is not useful. However, if I give the paper the benefit of the doubt, it said that knowledgeable people could anticipate the answer of less knowledgeable people.

    In prediction markets, the knowledgeable people are the people who understand how prediction markets work; who know how to make money regularly in these markets e.g. favourites often lose, betting on the favourite is a losing strategy, arbitrage can iron out discrepancies between related markets.

    The less knowledgeable people in this case are the people who think that prediction markets make accurate predictions or who think that the market was wrong when a one in ten probability event occurred. Unfortunately, that seems to include most mainstream economists.

    When people debate religion, you sometimes hear a debate that goes like this:

    Religious person: god is good
    Non-religious person: if god is good then why does he allow suffering?

    This is the wrong argument for the non-religious person to make because it concedes that the religious person has a point and moves the debate to whether god is good or bad.

    The ideas behind perfect markets, markets which always clear, markets which solve problems, markets which make accurate predictions, pool player analogies and invisible hands are the equivalent of religious beliefs. Arguing that a specific prediction market failed to make an accurate prediction is conceding that other prediction markets do make accurate predictions and, indeed, that we can measure the accuracy of a single prediction market. It is the wrong argument.

    The correct questions here would be to ask the less knowledgeable people to provide scientific evidence that prediction markets make accurate predictions starting with the set of measurable criteria they use to test that hypothesis.

    Finally, I am surprised that any government might think that a prediction market would be able to judge accurately the probability of a rare event like a nuclear strike. As you said, markets often misprice rare events. For example, last season the English Premier League (the leading club soccer tournament in England) was won by a team which started the season as a one in 5,000 chance. I read an interesting blog post about how bookmakers price up that type of market. They use similar techniques to you when you simulate markets many times. The article concluded that there is no way, even retrospectively, to price this type of rare event ‘correctly’.

    1. "Finally, I am surprised that any government might think that a prediction market would be able to judge accurately the probability of a rare event like a nuclear strike."

      I was quite a bit more than surprised. Shocked is probably a better description.

I intuitively thought it was a bad idea; the information equilibrium framework is an outgrowth of my work to demonstrate rigorously why. Basically I tried to get IARPA to see this argument:

      They said to get it in front of some academic economists, but as a field they are ... resistant to new ideas. So I took up blogging and hoped someone might read it.

      Anyway, I think we are talking past each other a bit. Maybe my linked post titled "Is the market intelligent?" in this comment would be clarifying. The paper in the post above does a few things that would in general alleviate the problems identified in "Is the market intelligent?" ... but likely wouldn't alleviate the problem of small probabilities. The problems don't have anything to do with the actual values of the probabilities (unless they are small per your story about the Premier League above) or how they change per se. They have to do with the information content of the signals in the transactions in the betting market and about whether they fully explore the state space (ideal information transfer) or fail to do so (non-ideal information transfer).

    2. I have read the “Is the market intelligent” post several times previously but I have just read it again to make sure I’m not missing something.

      Jason (from Is the market intelligent post): “it seems the market isn't intelligent as it can't in general solve the information aggregation problem. But it seems it can solve the allocation problem -- however we can't check if that is the best or even correct answer given an objective function because of the computational difficulties noted by Shalizi at the top of this post”

      I think that one of the key differences between us is that you are a theorist and I am a practical problem solver. We don’t need complex mathematical theories to see that markets are not perfect. We can just observe actual markets. The problem is that economists do not do this. They talk in simplifications which just ignore the imperfections. This is, in part, driven by their desire to simplify the world to make problems more mathematically tractable.

For example, supermarkets routinely overstock food and throw away what they can’t sell. Compared with a perfect market counterfactual, they increase the price of the food that they do sell to cover the costs of the waste associated with the food that they throw away. That’s a failure to solve an ideal allocation problem which is analogous to the failure of ideal information transfer in prediction markets. Nevertheless, supermarkets represent the state of the art in food markets just as prediction markets represent the state of the art in forecasting specific events. Meanwhile, economists never talk about the concept of waste as it has no place in their world of ideal markets and supply & demand diagrams.

      Also, it is the supermarkets who are making resource allocation decisions – not the abstract concept of the food market. Markets don’t solve allocation problems. In your terms, I’d say that they merely provide (mostly) state of the art but imperfect estimates to aggregate information problems.

      For practical problem solving purposes, we should see ideal information transfer and ideal resource allocation as unrealisable targets. However, to move towards these targets, we must challenge ourselves by asking how we can improve information transfer and resource allocation in any given situation. That is what businesses and other market participants do all the time. That is part of what we mean by innovation.

      That is why I asked you earlier why you didn’t bet against Clinton in the election markets. The key practical question is how we could make a better forecast next time. You are a very smart guy and you understand the concept of information transfer. If you can’t think of a practical way to improve the prediction of election results beyond the current state of the art then I’m not clear why you would expect anyone else, or an abstract concept like a market, to solve this problem. It makes sense to me to talk of prediction market “failure” only if you have a better practical alternative.


