Psychologist Philip Tetlock’s warning that “the average expert is roughly as accurate as a dart-throwing chimpanzee” has assumed Talmudic weight as the world awaits the results of next month’s U.S. presidential election. Donald Trump’s victory in the 2016 vote — after elections guru Nate Silver said he had “only” a 29% chance of winning the electoral college — was widely viewed as a stake through the heart of statistics-led political punditry, and it fuels a belief on both the right and the left that his re-election is imminent.
A more accurate conclusion is that most observers misunderstood and misapplied Silver’s analysis: Silver was saying that in roughly one out of every three elections, Trump would win. That failure wasn’t the fault of experts and pundits alone. Rather, it reflects shortcomings that are built into the way we think about the world — and that can be fixed (to some degree).
Given that virtually every decision is to some degree a prediction — “where shall we go for dinner?” makes assumptions about a future experience — it is remarkable just how bad we as a species are at it. Ancient civilizations credited the crazed and addled for insight, or were willing to seek destinies in entrails, excrement or the stars. We’ve advanced considerably in technique — although horoscopes remain popular — but the future remains as unknowable as ever.
Nobel Prize-winning economist and psychologist Daniel Kahneman argues that we make sense of a complex world with “heuristics” — mental shortcuts or rules — and biases derived from experience that allow us to substitute simpler problems for more difficult ones. Unfortunately, his work demonstrates that those shortcuts frequently don’t work as intended. We are rarely equipped with the numeracy or the patience to apply them properly, and as a result we fail to anticipate correctly what the future will bring.
That failure assumes greater significance when we move from decisions of personal importance to those of policy. Tetlock and J. Peter Scoblic note that “every policy is a prediction,” one that posits a causal relationship between means and ends. Every policy choice argues that “doing X now will lead to Y outcome.” For example, tax cuts will stimulate economic growth or building this missile will promote peace. Getting it wrong can be deadly. The COVID-19 crisis is a failure of imagination — a failure to accurately forecast the future — and as a result government planning and budgets reflected traditional definitions of “security” and left us struggling when a novel virus emerged. Getting cause and effect right in international affairs is especially tough, but it is getting exponentially harder as the world transforms and the mental maps that policymakers use for their analyses — their heuristics — are quickly outdated.
The COVID failure has policymakers scrambling to fill the gap. In their latest effort Tetlock and Scoblic advise policymakers to combine probabilistic forecasting and scenario planning for the best results. It isn’t an intuitive mix: The two approaches make fundamentally different assumptions about the future. Scenario planners start with the belief that there are so many possible futures that plausibility, not probability, is what matters. They then try to identify those futures. Decision-makers complain that understanding the contours of those imaginings isn’t enough. There needs to be some sense of likelihood, which is where forecasters enter the picture: They try to calculate the odds of possible outcomes. Tetlock and Scoblic conclude that a holistic approach that combines the two “would provide policymakers with both a range of conceivable futures and regular updates as to which one is likely to emerge.”
Intriguing as this may be — the newest issue of Foreign Affairs is titled “What are We Missing? Predicting the Next Crisis” — far more interesting and more relevant to most readers is Tetlock’s earlier work on forecasting that explains our inability to predict the future and offers suggestions on how to improve.
I may be digging my own grave, but I have to start with one of Tetlock’s most important conclusions: Experts are especially bad forecasters. Not only are their predictions no better than random guesses, but they are often outperformed by amateur news junkies. According to Tetlock’s research (and borrowing from Isaiah Berlin), “foxes” — individuals who know a lot about many different subjects, accept complexity and are open-minded and curious — invariably outperform “hedgehogs,” those who dig deep and know a lot about a single thing. The best forecasters accept uncertainty, and continually assess, update and revise their analyses. They are not wedded to conclusions or belief structures, and they constantly search for clues and analogies, which may not be obvious, to inform their logic and reasoning.
Ironically, Tetlock found an inverse correlation between fame and accuracy; the most famous experts had the worst records of prediction. Among other factors, he blamed the media, which demands short, simple and compelling stories, devaluing the nuance that lengthens and complicates a narrative — and which invariably renders it more accurate. As Tetlock explained, experts “are just human in the end. They are dazzled by their own brilliance and hate to be wrong. Experts are led astray not by what they believe, but by how they think.”
There is hope. People can be taught to be better forecasters. A critical skill is numeracy: Success demands an understanding of statistics and probabilities and the ability to use them properly. To go back to 2016, the odds on a Trump victory were long but statistics still indicated that he would win one out of every three elections — not one out of every three votes. It’s a simple example but it is revealing about how many people think.
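That point about probabilities can be made concrete with a short simulation (a sketch of my own, not from the article): a 29% forecast means that if the same election were somehow rerun many times, the “underdog” would win roughly three times in 10 — close to the one-in-three figure above, and a long way from impossible.

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

P_WIN = 0.29       # Silver's stated probability for a Trump victory
TRIALS = 100_000   # number of hypothetical reruns of the election

# Each trial is one simulated election, won with probability 0.29.
wins = sum(random.random() < P_WIN for _ in range(TRIALS))

print(f"Win rate over {TRIALS} simulated elections: {wins / TRIALS:.3f}")
# The "unlikely" outcome occurs in roughly 3 of every 10 runs,
# not in zero of them.
```

The simulated win rate lands very close to 0.29 — a reminder that a sub-50% forecast is a statement of frequency, not a prediction that the event will not happen.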
Equally valuable is the wisdom of the crowd, or at least similarly oriented teams. Open-minded thinkers use teams to provide insight into areas which they might not master, and as counterweights to their biases, reasoning and conclusions. Success depends on the avoidance of “group think” that neuters doubt and silences criticism. While conclusions must be reached, it is important to see all sides of a question and moderate “certainties” by incorporating opposing views.
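The wisdom-of-the-crowd effect can be sketched in a few lines (a toy model with made-up numbers, not data from Tetlock’s research): if forecasters err independently, averaging their estimates tends to cancel individual errors.

```python
import random
import statistics

random.seed(0)  # fixed seed so the toy model is repeatable

TRUE_PROB = 0.6      # hypothetical "true" probability of some event
N_FORECASTERS = 50   # size of the crowd

# Each forecaster sees the truth through independent noise, clipped to [0, 1].
forecasts = [min(1.0, max(0.0, random.gauss(TRUE_PROB, 0.15)))
             for _ in range(N_FORECASTERS)]

crowd = statistics.mean(forecasts)
mean_individual_error = statistics.mean(abs(f - TRUE_PROB) for f in forecasts)
crowd_error = abs(crowd - TRUE_PROB)

print(f"average individual error: {mean_individual_error:.3f}")
print(f"error of the crowd mean:  {crowd_error:.3f}")
```

The averaged estimate comes out far closer to the truth than the typical individual — which is also why the cancellation fails under group think: once members echo one another, their errors are no longer independent and no longer cancel.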
Finally, there are ways to structure inquiries to improve accuracy. One important step is deconstructing questions into smaller discrete queries that may be easier to answer. Think of the future as a series of steps and anticipate the likelihood of each. Breaking down the future will also help separate the knowable from the unknowable and save time (and credibility) on the issues that can in some way be ascertained. Being able to ascribe probabilities to those intermediate steps is also valuable. Assess your record and try to learn from success and failure. Good forecasters are constantly updating data and processes. And, don’t try to look too far into the future: There are too many variables to have confidence in judgments that peer too far ahead.
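Deconstructing a question also exposes how probabilities compound across steps. A minimal sketch, with illustrative numbers of my own: if an outcome requires three roughly independent steps to go right, the chained probability is the product of the step probabilities, and even three fairly likely steps multiply out to a long shot.

```python
# Hypothetical decomposition of a forecast into intermediate steps,
# each with its own (invented, illustrative) probability.
steps = {
    "candidate wins the primary": 0.70,
    "economy stays strong":       0.60,
    "candidate wins the general": 0.55,
}

# Assuming the steps are roughly independent, chain the probabilities.
p_overall = 1.0
for name, p in steps.items():
    p_overall *= p
    print(f"after '{name}': cumulative probability = {p_overall:.3f}")
```

Three steps of 70%, 60% and 55% compound to about 23% — and assigning a number to each step, as the paragraph above suggests, shows exactly which link in the chain the forecast hinges on.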
Self-awareness of limits and flaws, especially biases, is essential. Once that understanding is internalized, practice. It is worth the effort. Tetlock’s teams competed in forecasting tournaments sponsored by IARPA, the U.S. intelligence community’s research agency, that were open to the general public — and the top performers had scores that were 30 percent better than those of career CIA analysts with access to classified information.
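Assessing a forecasting record means scoring it, and the standard yardstick in Tetlock’s tournaments was the Brier score: the mean squared gap between the probabilities you gave and what actually happened (0 is perfect; always hedging at 50% scores 0.25). A minimal sketch with hypothetical predictions:

```python
def brier(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes.

    Lower is better: 0 is a perfect record, 0.25 is the score of
    always answering "50-50".
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track record: three probability forecasts and what occurred.
preds  = [0.9, 0.3, 0.7]   # the forecaster's stated probabilities
actual = [1,   0,   0]     # 1 = event happened, 0 = it did not

print(f"Brier score: {brier(preds, actual):.3f}")
```

The confident miss (70% on something that didn’t happen) dominates the score, which is the point: the metric rewards forecasters who revise toward reality rather than defend a bold call.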
Of course, foresight doesn’t guarantee that insights will be used. Organizations have many ways, some bold, some banal, to obstruct the use of even the most accurate predictions, a phenomenon that stretches back more than 2,000 years. Google “COVID Cassandra” — nearly 36 million hits — for proof.
I predict that we will continue to be flummoxed and frustrated by the future. Moreover, we will repeatedly be blindsided by events that some will have anticipated but our leaders ignored. Does that get me a banana?
Brad Glosserman is deputy director of and visiting professor at the Center for Rule Making Strategies at Tama University as well as senior adviser (nonresident) at Pacific Forum. He is the author of “Peak Japan: The End of Great Ambitions” (Georgetown University Press, 2019).