Saturday, September 13, 2014

"U.S. Intelligence Community Explores More Rigorous Ways to Forecast Events"

From the Wall Street Journal:

New Approaches to Prediction Emphasize Data-Gathering and Crowdsourcing Over Individual Deliberation 
Analysts for the Central Intelligence Agency, the National Security Agency and more than a dozen other government organizations depend on their ability to forecast national and global events to help ward off various threats to the country, but old-style approaches can produce flawed results.

To improve quality, the government has taken the unusual step of running tournaments that invite people outside the intelligence community to develop better ways to forecast world events, and several have produced notable results.

"Traditional forecasting in the intelligence community relied on human judgment, and the way in which humans make those judgments has tended to be unstructured deliberation," said Jason G. Matheny, a project manager for IARPA, the Intelligence Advanced Research Projects Activity, which is the research and development arm of the Office of the Director of National Intelligence. 

Deliberation is useful, but it isn't ideal for generating accurate forecasts: It is susceptible to groupthink. Social biases, such as deferring to those with seniority, intrude on the process. And dissenting views often aren't captured. The effects have led analysts to predict events that didn't occur, or miss events that did take place. "Pearl Harbor would be a failure to warn," Mr. Matheny said. 

Other examples that took the U.S. by surprise, he said, include the Suez crisis of the 1950s, the fall of the Shah of Iran in the 1970s and, more recently, the al Qaeda attacks of Sept. 11, 2001.

Among the forecasting tournaments run by IARPA are Aggregative Contingent Estimation (ACE), Forecasting Science and Technology (ForeST) and Open Source Indicators (OSI).

Each seeks to improve the precision and timeliness of intelligence forecasts by using techniques such as crowdsourcing, probability scores and machine learning—a field of computer science that "teaches" computers to recognize patterns in data. The strategies offset the weaknesses of any single source and overcome human tendencies to over- or underestimate the possibility of an event.
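The crowdsourcing idea described above can be sketched in a few lines: averaging many independent probability estimates tends to cancel out individual over- and underestimates. A minimal illustration, using hypothetical numbers rather than anything from the IARPA tournaments:

```python
# Hypothetical individual probability estimates for a single event.
# Some forecasters run high, some run low; the errors partly offset.
estimates = [0.9, 0.4, 0.7, 0.5, 0.65]

# The simplest crowd forecast is the unweighted mean of the estimates.
crowd_forecast = sum(estimates) / len(estimates)
print(round(crowd_forecast, 2))  # 0.63
```

Real aggregation schemes weight forecasters by track record and adjust the pooled estimate, but even the plain average usually beats most of the individuals who feed into it.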
ACE launched first, in 2011, with five teams competing, but the Good Judgment Project, led by Philip Tetlock of the University of Pennsylvania, beat out its rivals and now is the only team that IARPA funds. 

The project uses thousands of amateur forecasters to answer hundreds of specific questions regarding world affairs. The competitors' goal was to beat a control group by 50%, according to Brier scores, a measure of the accuracy of probabilistic predictions, by the third year of the four-year competition. "This team beat that metric by 70%, and they did it in year two," said Steven D. Rieber, who manages the project for IARPA....MORE
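For readers unfamiliar with the Brier score mentioned above: it is the mean squared difference between the forecast probability and the outcome (1 if the event occurred, 0 if not). Lower is better; 0 is perfect, and an uninformative constant 50% forecast scores 0.25. A minimal sketch with made-up forecasts:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities (0..1)
    and binary outcomes (1 = event occurred, 0 = it did not)."""
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must be the same length")
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident, well-calibrated forecaster on three hypothetical questions:
print(brier_score([0.9, 0.8, 0.1], [1, 1, 0]))  # 0.02

# Hedging everything at 50% yields the uninformative baseline:
print(brier_score([0.5, 0.5, 0.5], [1, 1, 0]))  # 0.25
```

The tournament target, "beat the control group by 50%", means cutting the control group's Brier score roughly in half.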
See also: 
We have many, many posts on the subject; if interested, use the blog search box with the keyword "forecasting".
Or, if that's not exactly the direction you want to go you may wish to consider:
"Pseudo-Mathematics and Financial Charlatanism...."

And previously on the Mountebank channel:
UPDATED--Are You a Recent Graduate Who Hasn't Found a Job? Consider Becoming a Charlatan
Follow-up: Choosing the Charlatan Career Path
Re-post: Peak Oil Stalwart to Shutter Forum/News Site, Pursue Career as Astrologer
See also:
Technical analysis
Fundamental analysis
Divination for Dummies
Pitfalls in Prognostication: Fortune Magazine's August, 2000 "Ten Stocks to Last the Decade"
And many more.