By David Henderson
Joe Nocera has a nice piece on risk management in last Sunday’s New York Times Magazine. A great line:
“The old adage, ‘garbage in, garbage out’ certainly applies,” Groz said. “When you realize that VaR is using tame historical data to model a wildly different environment, the total losses of Bear Stearns’ hedge funds become easier to understand. It’s like the historic data only has rainstorms and then a tornado hits.”
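To see concretely what Groz means by VaR using tame historical data, here is a minimal sketch of historical-simulation Value at Risk, one common way VaR is computed. The numbers are made up for illustration; the point is that a VaR estimate can only reflect losses the historical window actually contains:

```python
def historical_var(returns, confidence=0.99):
    """One-day historical-simulation VaR: the loss threshold that the
    historical returns exceeded only (1 - confidence) of the time."""
    losses = sorted(-r for r in returns)  # losses as positive numbers, ascending
    idx = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[idx]

# "Rainstorms": 500 hypothetical tame trading days, with daily moves
# alternating in sign and never worse than about a 2.4% loss.
tame_days = [(-1) ** i * (0.5 + (i % 20) / 10) / 100 for i in range(500)]

var_99 = historical_var(tame_days, confidence=0.99)
tornado = 0.20  # a hypothetical one-day 20% loss -- the "tornado"

print(f"99% one-day VaR estimated from tame history: {var_99:.1%}")
print(f"Tornado-day loss: {tornado:.0%}, roughly {tornado / var_99:.0f}x the VaR")
```

Run on this tame sample, the 99% VaR comes out to a 2.4% daily loss, so a 20% "tornado" day is about eight times worse than the model's stated worst plausible case. Garbage in, garbage out.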
The article quotes Nassim Nicholas Taleb, author of The Black Swan, at length. Interesting and well-done. Great paragraph:
Eventually, though, you do start to get the point. Taleb says that Wall Street risk models, no matter how mathematically sophisticated, are bogus; indeed, he is the leader of the camp that believes that risk models have done far more harm than good. And the essential reason for this is that the greatest risks are never the ones you can see and measure, but the ones you can’t see and therefore can never measure. The ones that seem so far outside the boundary of normal probability that you can’t imagine they could happen in your lifetime — even though, of course, they do happen, more often than you care to realize. Devastating hurricanes happen. Earthquakes happen. And once in a great while, huge financial catastrophes happen. Catastrophes that risk models somehow always manage to miss.
I agree with all the above.
Taleb, though, as is typical, overstates:
And that’s the point. Because we don’t know what a black swan might look like or when it might appear and therefore don’t plan for it, it will always get us in the end. “Any system susceptible to a black swan will eventually blow up,” Taleb says.
But precisely because a black swan is, by definition, a rare event, it won’t always get us, or at least not soon. Taleb is literally correct that any system susceptible to a black swan will eventually blow up, but the force of that claim hinges on the word “eventually.” What if “eventually” means in one hundred years?
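The point about “eventually” can be made quantitative with a back-of-the-envelope calculation (my numbers, not Taleb’s or Nocera’s). Suppose a blowup is a one-in-a-hundred-years event, and suppose, for simplicity, that years are independent:

```python
def prob_at_least_one(annual_prob, years):
    """Chance a rare event strikes at least once over a horizon,
    assuming independent years with a constant annual probability."""
    return 1 - (1 - annual_prob) ** years

p = 0.01  # a hypothetical one-in-a-hundred-years blowup
for horizon in (10, 30, 100, 300):
    print(f"{horizon:>3} years: {prob_at_least_one(p, horizon):.0%}")
```

Over a 30-year career the chance of seeing the blowup is only about one in four; even over a full century it is well short of certain (about 63%). “Eventually” is doing a lot of work.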
HT to Jeff Hummel.