Altometrics Blog

Intelligence Amplification, Data Visualization

Superforecasting

-

The Art and Science of Prediction by Philip E. Tetlock & Dan Gardner

The authors claim that by properly selecting questions and keeping predictors accountable, certain events can be predicted more accurately than chance.

The book describes an experiment that Tetlock designed as a forecasting tournament. He and his team selected questions that were time-limited and verifiable, so that the scoring wasn't arbitrary.

They found that most people work with a rather crude likely/unlikely scale of prediction; some add 50/50 to get a three-point scale. The people who did best in the tournament were the ones who broke likelihoods into finer-grained chunks, and some could discern, say, the difference between a 60% and a 61% chance of something happening.
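As a rough illustration of why that granularity matters, here is a small Python sketch using a quadratic (Brier-style) scoring rule of the kind the tournament relied on; the 0.61 "true" probability is an invented example, not a figure from the book:

```python
# Sketch: why finer-grained probabilities matter under a quadratic
# (Brier-style) scoring rule. The 0.61 "true" probability is invented.

def expected_brier(forecast: float, true_p: float) -> float:
    """Expected squared error of a probability forecast for a binary event."""
    return true_p * (1 - forecast) ** 2 + (1 - true_p) * forecast ** 2

true_p = 0.61
for forecast in (0.50, 0.60, 0.61):
    print(f"forecast {forecast:.2f}: expected Brier score "
          f"{expected_brier(forecast, true_p):.4f}")
# The expected score is minimized exactly at the true probability, so a crude
# likely/50-50/unlikely scale systematically leaves accuracy on the table.
```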

Superforecasters were good at imagining the opposite of the question. Say the question is "Will the US federal-funds rate be increased before the end of the year [2015]?" One good way to attack it is to first answer "Yes" and list the arguments for the rate being raised, then answer "No" and consider what would have to happen for the rate not to be raised. Done well, this can help you break free of assumptions that might blind you to the truth.

Another successful way to tackle a question is to look for the base rate. If you are trying to predict whether a certain film will pass a certain amount of gross ticket receipts by a certain date, look at its past performance. If, for example, receipts are trending downward and the area under the extrapolated curve falls short of the threshold, then a "yes" answer implies that something will cause an uptick in the remaining weeks. In this example you can directly quantify the needed uptick and look for reasons it might happen (e.g., movies have historically gotten a bump on Thanksgiving weekend).
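To make that arithmetic concrete, here is a small Python sketch; the weekly grosses, target, and number of remaining weeks are invented numbers, not data from the book:

```python
# Sketch of the base-rate arithmetic described above. The weekly grosses,
# target, and number of remaining weeks are invented for illustration.

weekly_gross = [42.0, 25.0, 15.0, 9.0]  # $M so far, trending downward
target_total = 120.0                    # $M threshold the question asks about
weeks_left = 4

# Estimate the typical week-over-week decay from the observed trend.
ratios = [later / earlier for earlier, later in zip(weekly_gross, weekly_gross[1:])]
decay = sum(ratios) / len(ratios)

# Extrapolate the remaining weeks at that decay rate ("area under the curve").
projected = list(weekly_gross)
for _ in range(weeks_left):
    projected.append(projected[-1] * decay)

baseline_total = sum(projected)
shortfall = target_total - baseline_total

print(f"weekly decay ≈ {decay:.2f}")
print(f"baseline projection: ${baseline_total:.1f}M")
print(f"uptick needed to reach ${target_total:.0f}M: ${shortfall:.1f}M")
# A "yes" forecast is implicitly betting on a specific cause of that uptick,
# e.g. a historical Thanksgiving-weekend bump.
```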

In the tournament, the authors found that a short, 30-minute orientation was enough to give a randomly selected group of participants a 10% accuracy bump over a group that didn't receive the orientation. The essence of the orientation was these "Ten Commandments":

  1. Triage: Focus on questions where your hard work is likely to pay off
  2. Break seemingly intractable problems into tractable sub-problems
  3. Strike the right balance between inside and outside views
  4. Strike the right balance between under- and overreacting to evidence
  5. Look for the clashing causal forces at work in each problem
  6. Strive to distinguish as many degrees of doubt as the problem permits but no more
  7. Strike the right balance between under- and overconfidence, between prudence and decisiveness
  8. Look for the errors behind your mistakes but beware of rearview-mirror hindsight biases.
  9. (When working on a team) bring out the best in others and let others bring out the best in you.
  10. Master the error-balancing bicycle
  11. Oh, and... don't treat commandments as commandments

Superforecasting is basically Moneyball for real life. It will be interesting to see, as these tournaments continue each year, what regression to the mean looks like. The authors address this: if the answers to these questions truly are random, then over time everyone's results will even out; but if skill is involved, some people will consistently be better predictors than others. Unless we hold people accountable for their predictions, we can't know which is true.
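As a toy illustration of that skill-versus-luck distinction, here is a Python simulation (all parameters invented): when forecasters differ in skill, their rankings largely carry over from one "season" to the next; when scores are pure luck, the rankings reshuffle.

```python
# Toy simulation of the skill-vs-luck point above. All parameters are invented.
import random

random.seed(0)

def season_scores(skills, luck_only=False):
    """Mean Brier score per forecaster over one season of 100 binary events."""
    scores = []
    for skill in skills:
        total = 0.0
        for _ in range(100):
            true_p = random.random()
            # Skilled forecasters land closer to the true probability;
            # in the luck-only world everyone guesses equally noisily.
            noise = 0.5 if luck_only else (1 - skill) * 0.5
            forecast = min(1.0, max(0.0, true_p + random.uniform(-noise, noise)))
            outcome = 1 if random.random() < true_p else 0
            total += (forecast - outcome) ** 2
        scores.append(total / 100)
    return scores

def rank_agreement(a, b):
    """Fraction of forecaster pairs ordered the same way in both seasons."""
    n = len(a)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    same = sum((a[i] < a[j]) == (b[i] < b[j]) for i, j in pairs)
    return same / len(pairs)

skills = [i / 19 for i in range(20)]  # 20 forecasters of varying skill

for label, luck in (("skill matters", False), ("pure luck", True)):
    year1 = season_scores(skills, luck_only=luck)
    year2 = season_scores(skills, luck_only=luck)
    print(f"{label}: year-to-year rank agreement ≈ {rank_agreement(year1, year2):.2f}")
```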

The authors have set up a public forecasting tournament anyone can participate in at https://www.gjopen.com. Additionally, there is a lot of free information at http://www.superforecasting.com.