Jan 18 2012
He talks with Russ Roberts. It's about how we should act when we don't know what we are doing. It's a great conversation. Note the case that Taleb makes for simple heuristics.
READER COMMENTS
david
Jan 19 2012 at 1:26am
The advocacy of simple heuristics conflicts directly with the argument against engaging in policies that increase volatility – simple and conceptually appealing changes (the mass switch to floating exchange rates at the end of Bretton Woods, for instance) can and did increase volatility. It's not obvious that a zero long-run deficit/surplus diminishes volatility, either (in fact, it is very easy to sketch models where it increases volatility for both the state and the state-plus-private economy).
Fundamentally you need some kind of model in mind to say interesting things about whether you are increasing or decreasing the impact of low-probability events. And then the heuristics rapidly stop being simple.
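To make the "very easy to sketch models" point above concrete, here is a toy Python simulation. The baseline, tax rate, multiplier, and shock size are all illustrative assumptions, not an estimated model of any economy:

    # Toy sketch (all parameters are illustrative assumptions): output each
    # period is a baseline plus a private-demand shock plus a multiplier times
    # the deviation of government spending from its baseline. A strict
    # balanced-budget rule ties spending to revenue, making it procyclical;
    # a smoothing rule holds spending fixed and lets the deficit absorb shocks.
    import random

    random.seed(42)
    BASELINE, TAX, MULT, SIGMA, T = 100.0, 0.4, 2.0, 5.0, 50_000

    def output_stdev(balanced_budget: bool) -> float:
        y, history = BASELINE, []
        for _ in range(T):
            spending = TAX * y if balanced_budget else TAX * BASELINE
            y = BASELINE + random.gauss(0, SIGMA) + MULT * (spending - TAX * BASELINE)
            history.append(y)
        mean = sum(history) / T
        return (sum((v - mean) ** 2 for v in history) / T) ** 0.5

    print("balanced budget  :", round(output_stdev(True), 2))   # ~8.3: shocks compound
    print("deficit smoothing:", round(output_stdev(False), 2))  # ~5.0: shocks absorbed

Nothing here proves anything about real fiscal policy; it just shows how quickly the "simple heuristic" of a balanced budget stops being simple once feedback enters the picture.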
The most interesting part of the interview was the attempt to explain what creates "antifragility" or "via negativa", but it is also very clear that Taleb is not speaking to any fellow student of complex systems.
rpl
Jan 19 2012 at 7:38am
I really don’t get Taleb’s appeal. His entire shtick seems to be spinning out elaborate, if somewhat strained, analogies between seemingly unrelated parts of the human experience. “Financial crises are like black swans, so you need to strengthen your strategy with a barbell — just like your skeleton!” They’re amusing, I guess, but also pretty useless because they can only “predict” things that we already know.
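For anyone who has not run into the term: the barbell is, roughly, most of your wealth in near-riskless assets and a small slice in convex, option-like bets, so losses are capped while extreme upside is kept. A minimal sketch, with weights, rates, and payoffs that are illustrative assumptions rather than Taleb's actual allocation:

    # Barbell vs. all-in market exposure under fat-tailed returns (a 95/5
    # Gaussian mixture stands in for the fat tails). All numbers are assumed.
    import random

    random.seed(7)

    def market_move() -> float:
        # usually a mild move, occasionally a wild one
        return random.gauss(0.05, 0.08) if random.random() < 0.95 else random.gauss(0.0, 0.5)

    def barbell(m: float) -> float:
        # 90% cash yielding 2%; 10% in a call-like bet that loses its premium
        # unless the market jumps more than 10%, then pays 10x the excess
        return 0.90 * 0.02 + 0.10 * (10.0 * max(m - 0.10, 0.0) - 1.0)

    moves = [market_move() for _ in range(100_000)]
    for name, fn in (("barbell", barbell), ("all-in", lambda m: m)):
        r = sorted(fn(m) for m in moves)
        print(f"{name:8s} worst {r[0]:+.3f}  median {r[len(r) // 2]:+.3f}  best {r[-1]:+.3f}")

In this sketch the barbell's worst case is pinned at about -8% no matter how wild the market draw, while most periods bleed the option premium; that slow bleed is consistent with the Empirica experience mentioned below.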
His last big insight, "Insulate yourself from tail risk," isn't helpful because the number of different kinds of tail risk is limited only by your imagination. To try to insulate yourself from all of them would result in paralysis. Eventually you have to adopt some model that tells you what sorts of risks are worth paying attention to. This new antifragility concept seems equally useless. "Adopt a strategy that gets stronger when it is stressed — just don't stress it past the breaking point!" Well, OK, but knowing for sure where that breaking point is and how to ensure that I never reach it is the real trick, isn't it? If I know that, then I'm all set, and it doesn't really matter what I do the rest of the time.
Taleb also seems less than honest about evaluating how his theories stack up against reality. For example, he continues to tout his barbell strategy, despite the fact that it performed pretty dismally when he tried putting it into practice at Empirica Kurtosis (which he dismisses as "just an experiment"). His M.O. seems to be to seize upon any apparent confirmation of his theories as vindication, while dismissing disconfirmatory evidence by saying that his theories "aren't theories of everything" or by ignoring it entirely.
I just don’t get why everyone takes this guy so seriously. Can someone who thought this was “a great conversation” (I found it painful to listen to) explain why? Is there anything useful to be extracted from his stories?
Adam
Jan 19 2012 at 9:33am
I've been told never to volunteer, but I'll volunteer to respond to rpl. What's interesting about Taleb's argument? First, before Taleb, no one, especially the economic no-ones, paid any attention to fat tails. It was all Merton-Scholes and insurance, even after LTCM and 1998. Second, his critique of economic modeling and prediction is exactly right. It's a Magister Ludi system corrupted by government subsidies and under-the-table payments (consulting, sinecures, monopolies, etc.). Step one: get rid of the Nobel Prize in economics. Third, antifragility may be a fruitful idea. I can see how dogged efficiency leads to fragile and risky structures. I understand how efficiency plays against redundancy.
Most of all, Taleb is interesting because he points out a problematic area that we all know is there but haven't quite been able to see as a distinct phenomenon. It's like his color-blue metaphor: the ancient Greeks were surrounded by blue, blue sky and blue water, yet they couldn't see it, because, he says, the ancient Greek language had no word for blue. Taleb supplies words for what we have not yet recognized. He reminds us of fat tails and formulates a concept of antifragility. That's intellectually interesting and promising. Maybe we can do something with those ideas.
rpl
Jan 19 2012 at 12:11pm
Adam,
I'm not saying that those observations aren't true; I just question whether they're as deep as Taleb's admirers like to make them sound. For example, in order to "pay attention to fat tails" you have to know that they're there and along what dimensions the tails are fat. (We like to visualize these things as one-dimensional distributions, where it's obvious where the fat tails are, but in reality they're many-dimensional, and it's not so obvious.) If you want to pay attention to them in any meaningful way, you probably also have to know a little about what they actually look like. But if you know all of that, then by definition you are already paying attention to the tails, and Taleb isn't telling you anything you don't already know.
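To put a number on "fat," here is a quick simulation comparing how often a Gaussian and a variance-matched Student-t with 3 degrees of freedom produce extreme draws; the distributions and cutoffs are illustrative choices, and this is exactly the easy one-dimensional case the parenthetical above warns against over-trusting:

    # Tail exceedance rates: Gaussian vs. Student-t(3) rescaled to unit
    # variance. Near the center the two look alike; far out, the t's tails
    # are orders of magnitude fatter. Stdlib-only, so counts are approximate.
    import random

    random.seed(1)
    N = 200_000

    def t3_unit_variance() -> float:
        # t(3) draw built from normals (Z / sqrt(chi2_3 / 3)), then rescaled
        # by 1/sqrt(3) because Var[t(3)] = 3
        z = random.gauss(0, 1)
        chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(3))
        return (z / (chi2 / 3) ** 0.5) / 3 ** 0.5

    gauss = [random.gauss(0, 1) for _ in range(N)]
    fat = [t3_unit_variance() for _ in range(N)]
    for k in (2, 3, 5):
        p_gauss = sum(abs(x) > k for x in gauss) / N
        p_fat = sum(abs(x) > k for x in fat) / N
        print(f"P(|X| > {k} sd):  gaussian {p_gauss:.5f}   t(3) {p_fat:.5f}")

At two standard deviations the two are nearly indistinguishable; at five, the Gaussian essentially never gets there while the t still does. That is the point: knowing that tails are fat only helps once you know where, and how fat.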
The same argument applies to the concept of antifragility. For example, I see that in another post today Arnold cites steps to prevent antibiotic resistance in bacteria as an example of Taleb's thesis. The thing is, nothing in Taleb's argument suggests ex ante that we should be worried about antibiotic resistance. We only know it's a problem because people have studied bacteria and observed the development of resistance. At that point, Taleb's observations become completely redundant; in fact, doctors and biologists have been urging steps to prevent resistance from developing for years, without benefit of Taleb's theories. It's just another example of Taleb (or, more correctly, one of his admirers) telling us what we already know and acting as though it's a stunning revelation. Tell me how to construct antifragility in a system I didn't even know was fragile, and then I'll be impressed.
Eric
Jan 19 2012 at 3:42pm
Adam wrote: "before Taleb, no one, especially the economic no-ones, paid any attention to fat tails."
Neologisms like "antifragility" notwithstanding, this is just false. Read the peso-problem literature from the 1970s, Mandelbrot's work on fat tails in the 1960s, or Tom Rietz's and Weitzman's applications of small probabilities to the equity premium. The option volatility smile steepened sharply after the 1987 US stock market crash; the recognition of fat, non-Gaussian tails has been an equilibrium among market participants for over 25 years.
His animosity towards famous economists, theorists, executives, data miners, journalists, editors, politicians, etc. suggests he's got real insecurity problems; regular people don't care. He's like a Keith Olbermann who writes books, without the wit.
His acolytes deserve him.
Cobb
Jan 23 2012 at 6:20pm
I can see how his line of thinking works in other areas of risk management besides economics. Much of what he says holds true for software coding as well, and it is in that regard that I find him interesting. I don't know what level of genius anyone who writes a book on economics is supposed to have, but clearly, if everyone knew what this crowd seems to find obvious about Taleb's insights, we wouldn't have the Bob Rubin problem. Or is all that baked into the equations as well?
The idea of doing nothing when you don't know all of the risks is a very difficult proposition to socialize into any organization, especially when you are the expert. As a software consultant, I find that customers want to buy more complexity than they will use, as if functionality in applications were like cupholders in automobiles that don't change the performance of the engine. I can see clearly how people don't want to make the difficult choice of pursuing multiple strategies, because they want single-source solutions.
We can't always quantify the risks of the complexity we take on when adding functionality to code; our markets are not stable enough. Everybody wants the next product before the current one has lived for two years, and this is what we get paid to produce. The latest isn't always the greatest. Patching the new is not always better, but it is always demanded.