
How U.S. intelligence creates games to improve its forecasts, with Canadian help


Canadian forecasting teams have dominated the game

The intelligence community has made predictions on the U.S. election, the war in Syria, Brexit, and now, the NAFTA negotiations. (Carolyn Kaster/Associated Press)

Amir Bagherpour already has a detailed set of charts predicting how everything will play out in the NAFTA negotiations, even though they don't actually start for another few weeks.

He makes predictions for a living.

The U.S. intelligence community runs a prediction market where forecasters across government compete for prognosticative supremacy — it looks like a golf tournament leaderboard, only instead of birdies and bogeys, people are ranked by how correctly they call coups d'état and counter-insurgencies.

Bagherpour was one of them. He was a U.S. State Department analyst under the Democrats and made predictions about things like Israeli-Palestinian peace, the Syrian conflict, Colombia's negotiations with the FARC rebels, and the counter-ISIS campaign.

His predictions are often bang on. He believed Donald Trump might win the presidency. He wrote a paper five years ago that predicted Bashar al-Assad would cling to power, with Syria's conflict spiralling into a stalemate defined by religion. Sometimes they miss the mark: he gave Brexit a one-third chance of success.

The administration he served took an active interest in the science of forecasting: "[Barack] Obama would ask, 'Where's the prediction market on this [topic]?'" says Bagherpour, who now runs a consultancy, Global Impact Strategies.

"Will Leader A in Country B be removed from power, by Date C?" That would be the type of question.- Seth Goldstein, who runs the forecasting competition

The U.S. intelligence community has created more than a half-dozen forecasting programs over the last few years through its research unit, the Intelligence Advanced Research Projects Activity (IARPA), modelled after the older Defense Advanced Research Projects Agency (DARPA) that helped create the Internet.

One example is an ongoing tournament between hybrid teams combining humans and machines. It's based on evidence that the best forecasting comes from a combination of computer algorithm and human guidance.
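The article doesn't describe IARPA's actual blending method, but a minimal sketch of one common approach, a weighted average of human and machine probability forecasts, might look like this (the weight is an illustrative assumption, not a value from the tournament):

```python
def blend_forecasts(human_p, machine_p, machine_weight=0.6):
    """Combine a human and a machine probability estimate for the
    same event with a simple weighted average. machine_weight is a
    tuning knob, not a figure from the article."""
    if not (0.0 <= human_p <= 1.0 and 0.0 <= machine_p <= 1.0):
        raise ValueError("probabilities must lie in [0, 1]")
    return machine_weight * machine_p + (1 - machine_weight) * human_p

# Human analyst says 70%, model says 40%: the blend lands in between.
print(blend_forecasts(0.70, 0.40))  # 0.52
```

Real hybrid systems weight each side by its track record on similar questions; a fixed weight is only the simplest starting point.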

"We love the concept of forecasting tournaments," said Seth Goldstein, who is running IARPA's human-machine Hybrid Forecasting Competition. He's limited in what he can say about the tournament, but offers one example of how it works.

We might ask: "'Will Leader A in Country B be removed from power, by Date C?' That would be the type of question ... We see what techniques work, and what techniques don't work. These tournaments [give us] a pretty good indication."

Canadian team dominates

Participants come from all walks of life, in academia and industry, and receive a stipend for taking part. But there are no rewards for accurate predictions. That's the lingering legacy of an old controversy, which forced a project to be shelved and the Pentagon boss running it to resign.

The source of controversy: a terrorism futures market. Created after the 9/11 attacks, it allowed participants to place bets on the occurrence of future terrorist acts, which critics viewed as tasteless at best and as a dangerous perverse incentive at worst.

The program was swiftly cancelled in 2003.

The initiative was reborn with a new generation of projects years later. And Canadians played a major role in the resurrection.

The team that dominated the first IARPA tournament was co-created by Philip Tetlock, a researcher, author, and University of Pennsylvania professor who was born in Toronto, and raised in Winnipeg and Vancouver.

His team beat a control group by a whopping 60 per cent and 78 per cent in the competition's first two years, starting in 2011. The results were so lopsided that organizers ended the competition, and Tetlock's team continued alone.

The U.S. government has just released the data collected from his team to help future researchers.

Some secrets to successful forecasting are quite simple, Tetlock says. He lays out a set of so-called Ten Commandments in his book, Superforecasting: The Art and Science of Prediction, co-authored with Canadian writer and public servant Dan Gardner.

Canadian behavioural scientist David Mandel, who works for Canada's Department of National Defence, had a 94 per cent accuracy rate for predictions he made in 2014. (The Canadian Press/Adrian Wyld)

Trick is to doubt yourself

One trick: doubt yourself. Assume your prediction is wrong, ask why, and incorporate that doubt factor into your assessment. Another is to tackle a problem in pieces — break the question into bite-sized chunks.
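One hedged way to picture the "doubt factor" (this is an illustration, not Tetlock's formula) is to shrink a raw probability part-way back toward the 50/50 mark:

```python
def apply_doubt(raw_p, doubt=0.2):
    """Shrink an overconfident probability toward 0.5.
    doubt=0 keeps the raw forecast unchanged; doubt=1 collapses
    it all the way to 50/50. The 0.2 default is arbitrary."""
    return (1 - doubt) * raw_p + doubt * 0.5

print(apply_doubt(0.9))  # 0.82: still confident, but less so
```

The same shrinkage formula works symmetrically on pessimistic forecasts, pulling a 10 per cent call up toward 18 per cent.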

I'm not talking about people who have Nostradamus clairvoyance properties. We're talking about people who are better at assigning realistic odds to everything. - Philip Tetlock, researcher and author

For example, Tetlock's book cites a love-starved London forecaster who wants to determine his number of potential mates. He takes the local population figure, divides it by two for gender, isolates an age range, the likely singles population, the university-educated percentage, and finally the percentage he will likely attract and be attracted to.

His conclusion: he has 26 potential mates in London.
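The book's actual input figures aren't reproduced in this article, so the numbers below are placeholders; the chunking method itself, though, is just a running product of a base figure and successive narrowing fractions:

```python
def fermi_estimate(base, fractions):
    """Multiply a base population by a chain of narrowing fractions,
    one per 'bite-sized chunk' of the question."""
    estimate = base
    for f in fractions:
        estimate *= f
    return estimate

# Placeholder values, NOT the figures from Superforecasting:
london = 8_000_000
steps = [0.5,    # right gender
         0.2,    # right age range
         0.5,    # likely single
         0.26,   # university-educated
         0.05,   # he finds attractive
         0.05]   # ...and who might find him attractive
print(round(fermi_estimate(london, steps)))  # 260
```

The point is not the final number but that each fraction can be estimated and challenged on its own, which is far easier than guessing the answer whole.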

Tetlock says great forecasters use this approach. These are the people who score highest on the zero-to-two Brier scale, the standard unit for measuring predictive success.

"I'm not talking about people who have Nostradamus clairvoyance properties," Tetlock said in an interview.

"We're talking about people who are better at assigning realistic odds to everything. Does that mean they're going to see everything — that whenever history hits a sharp corner they're going to be able to see around the corner? Absolutely not. There are limits on foresight. It helps to be smart. It helps to be well-informed."

Another Canadian provided expertise as the U.S. created IARPA's programs.

David Mandel is a behavioural scientist at Canada's Department of National Defence who measures the accuracy of forecasts within the Canadian government, notably its elite Privy Council Office Intelligence Assessment Secretariat.

He presented research at a workshop for the U.S. government in 2009 and was among a few researchers cited in a report prepared for the U.S. Director of National Intelligence as it set up its forecasting competition.

A public example of his work is a 2014 paper he co-authored examining 1,514 forecasts from the PCO unit; it found an impressive 94 per cent accuracy rate for predictions on whether events were more or less than 50-per-cent likely to occur.
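The paper's exact scoring method isn't detailed here, but the headline figure can be read as a hit rate against a 50-per-cent threshold: a forecast counts as correct when the side of 50-50 it favoured matches the outcome. A simplified sketch, with made-up sample data:

```python
def hit_rate(forecasts):
    """forecasts: list of (probability, event_occurred) pairs.
    Forecasts of exactly 0.5 take no side, so they are skipped
    as indeterminate."""
    decisive = [(p, occurred) for p, occurred in forecasts if p != 0.5]
    hits = sum((p > 0.5) == occurred for p, occurred in decisive)
    return hits / len(decisive)

# Made-up sample: two calls land on the right side of 50%, one doesn't.
sample = [(0.8, True), (0.2, False), (0.9, False)]
print(round(hit_rate(sample), 2))  # 0.67
```

A hit rate alone ignores calibration, which is why researchers like Mandel typically pair it with scores such as the Brier score that also punish overconfidence.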

He's trying to get his own country to build a prediction market.

Blend of game theory, expert surveys, data

"I have been in discussion with managers in [Canada's] intelligence community about that kind of issue," Mandel said in an interview. "It's just at the discussion stage. But I sense more enthusiasm than I've sensed since I began this — which was about a decade ago."

So, what about NAFTA?

Bagherpour used a blend of game theory, expert surveys, and data run through the software his company created to produce charts filled with predictions.

They concluded: NAFTA will survive; there won't be a trade war; the deal will be rebalanced slightly to reduce the U.S. trade deficit; the U.S. will open negotiations with hardball demands, then soften them to reach a deal.

He predicts Canada won't demand much. He shrugs when a reporter says Canada insists it has many demands, including on softwood lumber and expanded professional visas.

He replies: "That's not what this shows."