Sunday, June 9, 2013

Why Economic Models Are Always Wrong

From Scientific American:
 Financial-risk models got us in trouble before the 2008 crash, and they're almost sure to get us in trouble again

When it comes to assigning blame for the current economic doldrums, the quants who build the complicated mathematical financial-risk models, and the traders who rely on them, deserve their share of the blame. [See “A Formula For Economic Calamity” in the November 2011 issue.] But what if there were a way to come up with simpler models that perfectly reflected reality? And what if we had perfect financial data to plug into them?

Incredibly, even under those utterly unrealizable conditions, we'd still get bad predictions from models.
The reason is that current methods used to “calibrate” models often render them inaccurate.

That's what Jonathan Carter stumbled on in his study of geophysical models. Carter wanted to observe what happens to models when they're slightly flawed--that is, when they don't get the physics just right. But doing so required having a perfect model to establish a baseline. So Carter set up a model that described the conditions of a hypothetical oil field, and simply declared the model to perfectly represent what would happen in that field--since the field was hypothetical, he could take the physics to be whatever the model said it was. Then he had his perfect model generate three years of data showing what would happen in the field--by construction, perfect data. So far so good.

The next step was "calibrating" the model. Almost all models have parameters that have to be adjusted to make a model applicable to the specific conditions to which it's being applied--the spring constant in Hooke's law, for example, or the resistance in an electrical circuit. Calibrating a complex model for which parameters can't be directly measured usually involves taking historical data and, enlisting various computational techniques, adjusting the parameters so that the model would have "predicted" that historical data. At that point the model is considered calibrated, and should, in theory, predict what will happen going forward.
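To make the calibration step concrete, here is a minimal sketch (not Carter's procedure) using the article's own Hooke's-law example: synthetic noisy force readings are generated from a hypothetical "true" spring constant, and the constant is then recovered by least squares so that the calibrated model reproduces the historical data.

```python
# A minimal calibration sketch, assuming synthetic Hooke's-law data:
# given measured (displacement, force) pairs, recover the spring
# constant k by least squares so the model "predicts" the history.
import numpy as np

rng = np.random.default_rng(0)
k_true = 4.2                                  # hypothetical "true" spring constant
x = np.linspace(0.1, 1.0, 20)                 # displacements (m)
f = k_true * x + rng.normal(0, 0.05, x.size)  # noisy force readings (N)

# Calibrate: choose k minimizing the squared error against the data.
# For f = k*x this has the closed form k = sum(x*f) / sum(x*x).
k_fit = float(np.sum(x * f) / np.sum(x * x))
print(k_fit)
```

Here only one parameter is fit to clean, near-perfect data, so calibration recovers something close to the true value; the trouble described next arises when a model has many interacting parameters.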

Carter had initially used arbitrary parameters in his perfect model to generate perfect data, but now, in order to assess his model in a realistic way, he threw those parameters out and used standard calibration techniques to match his perfect model to his perfect data. It was supposed to be a formality--he assumed, reasonably, that the process would simply produce the same parameters that had been used to produce the data in the first place. But it didn't. It turned out that there were many different sets of parameters that seemed to fit the historical data. And that made sense, he realized--given a mathematical expression with many terms and parameters in it, and thus many different ways to add up to the same single result, you'd expect there to be different ways to tweak the parameters so that they can produce similar sets of data over some limited time period.
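The non-uniqueness Carter ran into can be shown with a deliberately tiny toy model (my own illustration, not his reservoir simulator): a three-parameter model fit to three "years" of history, where one term vanishes at every historical point. Infinitely many parameter sets then reproduce the history exactly, yet their forecasts disagree.

```python
# Toy demo of under-determined calibration: the cubic term
# c * t * (t-1) * (t-2) is zero at every historical time t = 0, 1, 2,
# so ANY value of c fits the three years of history exactly --
# but the forecasts at t = 3 differ.
def model(t, a, b, c):
    return a + b * t + c * t * (t - 1) * (t - 2)

history = [0, 1, 2]                  # the three calibration "years"
calib_1 = (5.0, 2.0, 0.0)            # one calibrated parameter set
calib_2 = (5.0, 2.0, 1.5)            # a different set, same fit to history

fits_match = all(model(t, *calib_1) == model(t, *calib_2) for t in history)
forecast_1 = model(3, *calib_1)      # 5 + 2*3 + 0         = 11.0
forecast_2 = model(3, *calib_2)      # 5 + 2*3 + 1.5*3*2*1 = 20.0
print(fits_match, forecast_1, forecast_2)
```

The two calibrated models are indistinguishable on the historical window but diverge immediately outside it, which is exactly the failure mode the next paragraph describes.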

The problem, of course, is that while these different versions of the model might all match the historical data, they would in general generate different predictions going forward--and sure enough, his calibrated model produced terrible predictions compared to the "reality" originally generated by the perfect model...MORE
HT: vbounded
We have a couple hundred posts on models: climate, financial, economic, high fashion.
The lessons of all of those electrons are:

"All models are wrong, but some are useful"
-George E.P. Box
Section heading, page 2 of Box's paper, "Robustness in the Strategy of Scientific Model Building"
(May 1979)
 
and:
“It would appear the ability to see risk is inversely proportional to the time spent trying to model it.”
An instant banking classic, from the Treasury Select Committee hearing on HBOS (Richard Smith): https://twitter.com/creditplumber/status/275673495713751040