
Filmed on Monday, November 23, 02015

Philip Tetlock

Superforecasting

Philip Tetlock is the author of Superforecasting: The Art and Science of Prediction and Expert Political Judgment: How Good Is It? How Can We Know? He is a professor at the University of Pennsylvania and co-creator of The Good Judgment Project, a multi-year study of crowd forecasting of world events.

The pundits we all listen to are no better at predictions than a “dart-throwing chimp,” and they are routinely surpassed by normal news-attentive citizens. So Philip Tetlock reported in his 02005 book, Expert Political Judgment—and in a January 02007 SALT talk.

It now turns out there are some people who are spectacularly good at forecasting, and their skills can be learned. Tetlock discovered them in the course of building winning teams for a tournament of geopolitical forecasting run by IARPA—Intelligence Advanced Research Projects Activity. His brilliant new book, SUPERFORECASTING: The Art and Science of Prediction, spells out the methodology the superforecasters developed. Like Daniel Kahneman’s THINKING, FAST AND SLOW, the book changes how we think about thinking.

Philip Tetlock is a professor at the University of Pennsylvania. With his co-researcher (and wife) Barbara Mellers, he runs the Good Judgment Project, with its open competition for aspiring forecasters.

All it takes to improve forecasting is to KEEP SCORE

Will Syria’s President Assad still be in power at the end of next year? Will Russia and China hold joint naval exercises in the Mediterranean in the next six months? Will the Oil Volatility Index fall below 25 in 2016? Will the Arctic sea ice mass be lower next summer than it was last summer?

Five hundred such questions of geopolitical import were posed in tournament mode to thousands of amateur forecasters by IARPA—the Intelligence Advanced Research Projects Activity—between 2011 and 2015. (Tetlock mentioned that senior US intelligence officials opposed the project, but younger-generation staff were able to push it through.) Scores were kept with extreme care, and before long the most adept amateur “superforecasters” were doing 30 percent better than professional intelligence officers with access to classified information. They were also better than prediction markets and drastically better than famous pundits and politicians, whom Tetlock described as engaging in a deliberately vague “ideological kabuki dance.”

What made the amateurs so powerful was Tetlock’s insistence that they score geopolitical predictions the way meteorologists score weather predictions, and then learn how to improve their scores accordingly. Meteorologists predict in percentages—“there is a 70 percent chance of rain on Thursday.” It takes time and statistics to find out how good a particular meteorologist is. If it did in fact rain on 7 out of 10 such days, the meteorologist earns a high score for calibration (the stated percentages match the observed frequencies) and for resolution (the forecasts were decisive rather than hedged near 50 percent). Superforecasters, remarkably, assigned probability estimates of 72-76 percent to things that happened and 24-28 percent to things that didn’t.
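To make the scoring concrete, here is a minimal Python sketch of that scorecard: a Brier-style squared-error score, the metric weather forecasters have long been graded on, plus a calibration table comparing stated percentages with observed frequencies. The function names and the ten-day example are invented for illustration, not taken from the talk.

```python
# A hedged sketch of the weather-forecaster's scorecard; names and data are illustrative.
from collections import defaultdict

def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes (0 is perfect)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def calibration_table(forecasts, outcomes):
    """Bucket forecasts to the nearest 10% and report the observed frequency in each bucket."""
    buckets = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        buckets[round(f, 1)].append(o)
    return {b: sum(os) / len(os) for b, os in sorted(buckets.items())}

# A forecaster who says "70 percent" ten times, and it rains on 7 of those days:
forecasts = [0.7] * 10
outcomes = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
print(round(brier_score(forecasts, outcomes), 2))  # 0.21, lower is better
print(calibration_table(forecasts, outcomes))      # {0.7: 0.7}, well calibrated
```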

How did they do that? They learned, Tetlock said, to avoid falling for the “gambler’s fallacy”—detecting patterns where none exist. They learned objectivity—the aggressive open-mindedness it takes to set aside personal theories of public events. They learned not to overcompensate for previous mistakes—the way American intelligence professionals overcompensated for the false negative of 9/11 with the false positive of weapons of mass destruction in Saddam’s Iraq. They learned to analyze from the outside in—Assad is a dictator; most dictators stay in office a very long time; consider any current news out of Syria in that light. And they learned to balance between over-adjustment to new evidence (“This changes everything”) and under-adjustment (“This is just a blip”), and between overconfidence (“100 percent!”) and over-timidity (“Um, 50 percent”). “You only win a forecasting tournament,” Tetlock said, “by being decisive—justifiably decisive.”
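The outside-in habit is essentially Bayesian: anchor on a base rate, then let each piece of news move the estimate only as far as the evidence warrants. A minimal sketch, with every number invented for illustration:

```python
# All numbers are invented for illustration. Hypothesis: "Assad is still in
# power at year's end." Start from the outside view (an assumed base rate for
# dictators surviving a given year) and update on one piece of news.

def bayes_update(prior, p_news_if_true, p_news_if_false):
    """Posterior probability of the hypothesis after seeing the news, via Bayes' rule."""
    joint_true = prior * p_news_if_true
    return joint_true / (joint_true + (1 - prior) * p_news_if_false)

base_rate = 0.87  # assumed outside-view base rate, purely illustrative
# News: a rebel offensive is reported. Assume such news is somewhat more likely
# in worlds where the regime falls than in worlds where it survives.
posterior = bayes_update(base_rate, p_news_if_true=0.6, p_news_if_false=0.8)
print(round(posterior, 2))  # 0.83: a measured nudge, not "this changes everything"
```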

Much of the best forecasting came from teams that learned to collaborate adroitly. Diversity on the teams helped. One important trick was to give extra weight to the best individual forecasters. Another was to “extremize” to compensate for the conservatism of aggregate forecasts: because each forecaster holds only part of the relevant evidence, averaging their judgments pulls the result toward 50 percent, so if everyone says the chances are around 66 percent, the real chances are probably higher.
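One widely used extremizing recipe raises the pooled odds to a power greater than one, pushing the aggregate away from 50 percent. The sketch below uses an illustrative exponent, not the tournament’s actual parameter:

```python
# Extremizing a pooled forecast by raising its odds to a power a > 1.
# The exponent and team numbers are illustrative assumptions.

def extremize(p, a=2.5):
    """Push a pooled probability away from 0.5 by raising its odds to the power a."""
    odds = (p / (1 - p)) ** a
    return odds / (1 + odds)

team = [0.60, 0.65, 0.70]           # individually cautious estimates
pooled = sum(team) / len(team)      # plain average: 0.65
print(round(extremize(pooled), 2))  # 0.82: the aggregate, made more decisive
```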

In the Q & A following his talk Tetlock was asked if the US intelligence community would incorporate the lessons of its forecasting tournament. He said he is cautiously optimistic. Pressed for a number, he declared, “Ten years from now I would offer the probability of .7 that there will be ten times more numerical probability estimates in national intelligence estimates than there were in 2005.”

Asked about long-term forecasting, he replied, “Here’s my long-term prediction for Long Now. When the Long Now audience of 2515 looks back on the audience of 2015, their level of contempt for how we go about judging political debate will be roughly comparable to the level of contempt we have for the 1692 Salem witch trials.”


