The Numeric Solution, October 15, 2016


Editor's Note: At Stratfor, we are primarily qualitative analysts, but we build our analyses on a base of quantitative factors. This essay explores the role of assigning numeric probability — the quantitative — in forecasting, a tool we seldom use. In the coming days, Stratfor Vice President of Strategic Analysis Rodger Baker will discuss the ways in which applying quantitative figures to qualitative judgments can be misleading, as well as our own efforts to create more clarity in our forecasts.
By Dan Gardner and Philip E. Tetlock

One simple change to forecasters' standard operating procedure could boost forecast accuracy, increase accountability, reduce misunderstandings and miscalculations, and generally make the world a wealthier and safer place. The change? Use numbers. No more saying "it is likely," "improbable," "to be expected" or "all but certain." Instead, say there is a 60 percent, 23 percent, 78 percent or 95 percent chance. That's it. If pundits, journalists, economists, intelligence analysts, geostrategists and others who prognosticate for a living switched to numbers, they would do nothing less than improve humanity's collective foresight.

Based on experience, we suspect readers had one of two reactions to the preceding argument. One reaction is to frown. "Isn't it obvious that numbers are preferable? Why wouldn't forecasters use numbers? What's the big deal?" The second reaction is also to frown, but for a different reason. "Nonsense! Reality is too complex and fuzzy for precise numbers. It's a delusion to think you can quantify everything."

We suggest those who had the first reaction look around a little more carefully. Most forecasting is not explicitly labeled as such. Indeed, it is often only implicit in analyses and judgments. In that forecasting, language dominates. A broad movement away from language to numbers would indeed be a huge change. And yes, it would be a big deal.

Much of the language used in forecasting is horribly vague. "Donald Trump could win." "The British pound may collapse." Read literally, these statements mean almost nothing. Or if you prefer, almost anything: You may have dinner tomorrow night; you could be crushed by a meteor before finishing this column. And yet, forecasts of "may" or "could" seldom prompt hoots of derision because the forecaster and his or her audience seldom take them literally. Instead, they use context, tone and body language to suggest a more precise meaning. Someone who leans forward, opens his eyes wide and says, "The British pound may collapse" is not saying the probability lies somewhere between a fraction of 1 percent and almost 100 percent. He is saying the probability is high enough to be alarming. That's still vague, but not so vague as to appear ridiculous. So people accept this language.

They shouldn't. The potential it creates for misunderstanding is vast. And forget about accountability. A forecaster who says, "Donald Trump could win" will always be right. If Trump wins, you can be sure he will read a higher probability into his forecast than he had in mind at the time and declare himself correct. If Trump loses, you can be equally sure he will underscore the literal meaning of "could" and declare himself correct. It's not that he's dishonest. It's that he's human.

But the worst damage done by hopelessly vague verbiage is something else entirely.

The Danger of Ambiguity

Forecasting is a skill, and both common sense and abundant research tell us that the only way to improve a skill is to practice. Think of the basketball player shooting free throws over and over and over. But for practice to be effective, there must be clear, timely feedback. The basketball player gets that: If a shot bounces off the rim to the right, he sees it immediately and adjusts his next shot accordingly. But forecasters routinely get ambiguous feedback, especially if they use vague language. On Nov. 9, the day after the election, the forecaster who said "Trump could win" will learn no lesson. He's like a basketball player shooting free throws in a gym at night with the lights out. He gets no clear feedback, and his skill will not improve.

Of course, words like "could" and "may" are extreme in their plasticity. And to be fair, serious forecasters often use more defined language — phrases like "highly likely" or "very improbable." But that changes little. Research has shown that people take even apparently precise terms like "highly likely" to mean widely different things. And that is dangerous. In our book, Superforecasting: The Art and Science of Prediction, we discuss the famous case of a National Intelligence Estimate that concluded there was "a serious possibility" of a Soviet invasion of Yugoslavia in 1951. The lead author, the legendary Sherman Kent, thought the phrase was concrete and meaningful. So did the team of analysts who agreed to use that language. And no one objected when the estimate was sent to the State Department, Pentagon and White House. But a casual chat with an official made Kent realize that he and the official had a completely different sense of what "a serious possibility" meant. So Kent went back to his team and asked each person what they thought it meant: The answers ranged from 20 percent to 80 percent. Kent was shocked, and he became an advocate of using numbers to express probabilities.

Less misunderstanding, more accountability, improved skill, greater forecast accuracy and all the benefits that flow from a more accurate perception of the future: The case for using numbers is strong. But there are objections that must be answered.

Staying Aware of Numbers' Limits

One that doesn't deserve a lot of attention is the almost aesthetic revulsion some have to describing reality with numbers. True, numbers lack a certain artistry. But as Kent memorably put it, "I'd rather be a bookie than a goddamn poet."

A far more serious concern is that numbers create undue confidence by giving an estimate a bogus scientific veneer. Con men and blowhards know how that works. "Most stats are made up" sounds like empty bloviation, whereas "83.5 percent of statistics are meaningless" impresses with precision. Put together many of these numbers and you may even convince yourself you have reality and the future all figured out — until your hubris is demolished by something horribly unexpected.

We have a lot of sympathy for this view. It's hard to dispute that numbers are too often treated like totems. But the answer, surely, is not to avoid numbers, but rather to avoid treating numbers like totems. If a forecast is a subjective estimate expressed in numeric form, say so: "After careful consideration, my best guess is that there is a 77 percent chance it will happen" is absolutely clear about what it is and what it is not. And it is no different than "I think it is likely to happen" — except that it eliminates possible misunderstandings, and, when aggregated with other such forecasts, it makes accountability possible and generates clear feedback.
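To make that concrete, here is a minimal sketch, in Python, of how numeric forecasts generate the feedback and accountability that vague language cannot. It uses the Brier score, the standard accuracy measure in the forecasting research discussed in Superforecasting; the forecasts and outcomes below are purely hypothetical.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error of probabilistic forecasts against binary
    outcomes (lower is better; 0 is perfect, 0.25 matches always saying 50%)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track record: four numeric forecasts and what actually happened
# (1 = the event occurred, 0 = it did not).
forecasts = [0.77, 0.60, 0.95, 0.23]
outcomes = [1, 0, 1, 0]

print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")
# "It could happen" cannot be scored at all; "77 percent" can.
```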

Finally, there is the metaphysical objection that when making a subjective estimate people simply cannot comprehend our complex and uncertain reality well enough to meaningfully distinguish between, say, a 77 percent probability and 70 or 65 percent. In this view, the elasticity of terms like "probably" is good because it allows them to stretch across a wide range of probabilities, which is the most precise resolution we flawed humans are capable of. To use precise numbers is to fool ourselves into believing we can do better.

But that's an empirical claim. It can be tested. And it has been, in the research program we discuss in Superforecasting.

A Complement to Intuition

We discovered that the best forecasters tend to be extremely fine-grained forecasters. They don't use 20, 30, 40 percent and so on. Or 20, 25 and 30 percent. Their scales read 20, 21, 22 percent…. Is this hubris and delusion? When we rounded their forecasts to make them less precise — so a 72 percent forecast, for instance, became 70 percent — we found that their forecasts became less accurate. That means that when superforecasters think carefully and decide that 72 percent is closer to the mark than 70 percent, chances are they are right.
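For readers who want to see the logic of that rounding test, here is a toy version in Python. It illustrates the idea rather than reproducing the study: the forecasts, outcomes and rounding granularity below are invented for the example.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error of forecasts against binary outcomes (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def coarsen(forecasts, granularity=0.10):
    """Round each forecast to the nearest multiple of `granularity` (e.g. 0.72 -> 0.70)."""
    return [round(f / granularity) * granularity for f in forecasts]

# Invented fine-grained forecasts and outcomes, chosen to illustrate the effect.
fine = [0.74, 0.84, 0.26, 0.63, 0.08]
outcomes = [1, 1, 0, 1, 0]

print(f"fine-grained Brier score: {brier_score(fine, outcomes):.3f}")           # ~0.061
print(f"rounded Brier score:      {brier_score(coarsen(fine), outcomes):.3f}")  # ~0.078
# If the extra digits carry real information, rounding them away
# worsens (raises) the score -- the pattern the research found.
```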

Of course, this evidence applies only to the sorts of questions (Will Scotland vote to leave the United Kingdom? Will Russia seize Crimea?) and time frames (from weeks to more than a year) involved in our research. But that research was sponsored by the U.S. intelligence community, and it was designed to probe the sorts of big, important issues the intelligence community has to tackle. How many other domains are there where these levels of precision can be achieved? We can only know if we test on a large scale.

This may be where the greatest benefit of numeric forecasting lies. Not only can individual forecasts and forecasters be improved by using numbers; numeric forecasts can also be aggregated. From aggregation, broader analyses and insights may come. So may new ways of making accurate forecasts — like the "extremizing algorithm" we developed, which won the intelligence community's forecasting tournament.
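As an illustration of what such aggregation can look like, here is a hedged sketch: average a crowd of numeric forecasts, then push the average toward 0 or 1. The transform below, p^a / (p^a + (1 - p)^a), is one form of extremizing that appears in the published forecasting literature; it is not necessarily the exact tournament-winning algorithm, and the exponent a = 2.5 is purely illustrative.

```python
def extremize(p, a=2.5):
    """Push a pooled probability away from 0.5; any exponent a > 1 extremizes."""
    return p ** a / (p ** a + (1 - p) ** a)

def aggregate(forecasts, a=2.5):
    """Average a crowd of numeric forecasts, then extremize the average."""
    mean = sum(forecasts) / len(forecasts)
    return extremize(mean, a)

# Hypothetical crowd of forecasts all leaning the same way.
crowd = [0.65, 0.70, 0.60, 0.75, 0.68]
print(f"simple average: {sum(crowd) / len(crowd):.2f}")  # 0.68
print(f"extremized:     {aggregate(crowd):.2f}")         # ~0.86, pushed toward 1
```

The intuition behind the push toward the extremes: if many forecasters independently lean the same way on partly different evidence, the pooled evidence justifies more confidence than the simple average of their individual probabilities.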

We have made some big claims here. But please note that we are not the sort of numbers people who think quantitative analysts with computers will or should take over. Quite the opposite. While it's a safe bet that in the future quantitative analysts with computers will play a much bigger role in forecasting than they do now, experience and good judgment will not become the obsolete tools of a bygone age. As we are already seeing in fields where the progress of computers and statistical analysis is particularly advanced — think of chess and baseball — people haven't been put in museum display cases. They're still at work. The difference is that now they combine the new tools with experience and judgment to come up with something better — something that neither machine nor man could produce alone.

Experience and judgment will always be essential. We need to make the most of them. And that starts by switching to numbers.

The Numeric Solution is republished with permission of Stratfor.
