In a recent episode of EconTalk, Russ Roberts interviewed Stanford University’s John Ioannidis to discuss his 2017 publication “The Power of Bias in Economics Research.” Ioannidis and his co-authors found that the vast majority of estimates of economic parameters were from studies that were “underpowered,” and this, in turn, meant that the published estimates of the magnitude of the effects were often biased upward.
Unfortunately, many economists (including me) have little training in the concept of “statistical power” and might be unable to grasp the significance of Ioannidis’ discussion. In this article, I give a primer on statistical power and bias that will help the reader appreciate Ioannidis et al.’s shocking results: After reviewing meta-analyses of more than 6,700 empirical studies, they concluded that most studies, by their very design, would often fail to detect the economic relationship under study. Perhaps worse, these “underpowered” studies also provided estimates of the economic parameters that were highly inflated, typically by 100%, and in one-third of cases by 300% or more.
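To see the mechanism concretely, here is a minimal simulation of that filtering effect. It is my own sketch, not code from Murphy's article or from Ioannidis et al.; the true effect, standard error, and number of studies are all illustrative assumptions:

```python
# A minimal sketch of how underpowered studies inflate published effects.
# All numbers (true effect, standard error) are illustrative assumptions,
# not figures from Ioannidis et al.
import numpy as np

rng = np.random.default_rng(0)

true_effect = 1.0    # the real parameter value (assumed)
se = 2.0             # each study's standard error (assumed: a noisy design)
n_studies = 100_000  # number of simulated studies
z_crit = 1.96        # two-sided 5% significance threshold

# Each "study" yields one noisy estimate of the true effect.
estimates = rng.normal(true_effect, se, n_studies)

# A study "detects" the effect when |estimate / se| exceeds the threshold.
significant = np.abs(estimates / se) > z_crit

power = significant.mean()                # fraction of studies that detect it
inflated = estimates[significant].mean()  # average *published* magnitude

print(f"power: {power:.2f}")                          # ~0.08: badly underpowered
print(f"mean significant estimate: {inflated:.2f}")   # ~4x the true value of 1.0
```

If only the statistically significant estimates reach print, the literature's average under these assumed numbers is roughly quadruple the truth, which is the kind of exaggeration Ioannidis et al. document in actual economics research.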
Economists should familiarize themselves with the concept of statistical power to better appreciate the possible pitfalls of existing empirical work and to produce more-accurate research in the future.
This is from Robert P. Murphy, “Economists Should Be More Careful With Their Statistics,” Econlib Feature Article, April 2, 2018.
Another highlight:
Furthermore, the underpowered studies also implied very large biases in estimates of the magnitude of economic parameters. For example, of 39 separate estimates of the monetary value of a “statistical life” (a concept used in cost/benefit analyses of regulations), 29 (74%) of the estimates were underpowered. For the 10 studies that had adequate power, the estimate of the value of a statistical life was $1.47 million, but the 39 studies collectively gave a mean estimate of $9.5 million. After our hypothetical examples of coin-flipping researchers, this real-world example leads one to suspect that the figure of $9.5 million is likely to be vastly exaggerated.
Until reading this, I had had no idea that the $5 to $9 million number I had been using in cost/benefit analysis was likely way too high.
Read the whole thing.
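For readers curious about the mechanics of the power calculation itself, here is a hedged sketch of the textbook formula for a two-sided z-test. The effect size and standard errors below are illustrative assumptions, not numbers from the value-of-statistical-life literature:

```python
# A textbook power calculation for a two-sided z-test (my own sketch;
# the inputs below are illustrative assumptions).
from scipy.stats import norm

def power_two_sided_z(true_effect, se, alpha=0.05):
    """P(|estimate / se| > z_crit) when estimate ~ Normal(true_effect, se)."""
    z_crit = norm.ppf(1 - alpha / 2)
    z = true_effect / se
    return norm.sf(z_crit - z) + norm.cdf(-z_crit - z)

print(power_two_sided_z(true_effect=1.0, se=2.0))    # ~0.08: underpowered
print(power_two_sided_z(true_effect=1.0, se=0.357))  # ~0.80: conventionally adequate
```

The conventional benchmark for “adequate” power is 80%, and by Ioannidis et al.’s count most estimates in economics fall far short of it.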
READER COMMENTS
Khodge
Apr 3 2018 at 2:46am
There is a certain “correctness” to this; the greater the estimated magnitude in a study, the higher its significance. This is true in any field using statistics, and it falls under confirmation bias rather than intentionally presenting misleading data.
As a subject draws more attention, the marginal effectiveness of any study in any field should fall.
Jon Murphy
Apr 3 2018 at 8:21am
An excellent article everyone should read.
My roommate is a statistician. When we first started living together and I was taking my econometrics course, he made sure I understood what, exactly, statistical power is and what, exactly, hypothesis testing is (and is not!). Those lessons have served me well; they’re important concepts that economists (including me) often don’t understand.
Jacob Egner
Apr 3 2018 at 12:56pm
Hmmm, the replication crisis (or at least a major recognition that much prior work is built upon noise) might soon be coming to economics.
I’m guessing you mean “2018” and not “2108”.
Airman Spry Shark
Apr 3 2018 at 7:53pm
For a (slightly) more technical primer on the issue, see Andrew Gelman’s “The statistical significance filter.”
David R Henderson
Apr 3 2018 at 10:26pm
Thanks, Jacob Egner.