Edward Leamer’s indictment of modern econometrics, “Let’s take the ‘con’ out of econometrics,” is the best-known critique of our habits as empirical economists, but it has not been taken to heart by the profession.
There are a number of generic criticisms of regression methodology. As I recall from Leamer’s book Specification Searches, he argues that while theoretical tests of statistical significance are based on a single confrontation between model and data, the practice of econometricians is to try a number of empirical specifications before reporting the one they like best. This means that reported standard errors are too low, t-statistics are too high, and reports of statistical significance are greatly exaggerated.
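A minimal simulation of Leamer's point (my sketch, not his): let the outcome be pure noise, try twenty candidate specifications, and report only the one with the smallest p-value. A nominal 5% test then fires far more often than 5% of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_specs, n_sims = 100, 20, 1000
hits = 0

for _ in range(n_sims):
    y = rng.normal(size=n)  # pure noise: no true effect exists
    # run 20 candidate specifications; keep only the best-looking p-value
    best_p = min(stats.pearsonr(rng.normal(size=n), y)[1]
                 for _ in range(n_specs))
    if best_p < 0.05:
        hits += 1

print(f"nominal size 5%, share of 'significant' findings: {hits / n_sims:.0%}")
# roughly 1 - 0.95**20, i.e. about 64%
```

The p-value from the favored specification is honest only about that one regression; it says nothing about the search that produced it.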
The rest of Russ’ post talks a lot about what I think of as confirmation bias. Basically, when we like a result, we give the methodology much less scrutiny than when we dislike it.
I think that good economists rarely change their minds about a topic based on a regression result. That observation, if true, suggests that something is wrong. Part of what is wrong is that we have confirmation bias, so even when we should change our minds we come up with excuses not to believe contrary results. But an even bigger part of what is wrong is that regression results are indeed quite fragile, so one should not give them much weight.
For example, on the question of whether the death penalty deters murder, I would never trust a regression result. There are too many choices for specifying the causal variable. Is it a dummy variable set equal to 1 if the crime falls under the death penalty statute in the state where it is committed? Is it a probability of being executed, based on some measure of past performance of the justice system? Is it some measure of what potential murderers think the probability of being executed is? Moreover, there are way too many important potential control variables. Etc.
My personal opinion, which is unlikely to be swayed much in either direction by econometric results, is that the death penalty is unlikely to have much effect on murder. My guess is that most murders are committed under circumstances in which the killer is neither in the mood nor in possession of the data to incorporate potential death penalty punishment.
On the other hand, society would benefit from making other crimes capital offenses. When I was in graduate school in Cambridge (Ma.), double-parking was rampant. I wanted to get around on my bike, and urban streets are hazardous enough as it is. Double-parked cars are a nightmare. I was a teaching assistant for Ec 10 at Harvard, and when it came time to teach the economics of crime, I advocated the death penalty for double-parking.
I am pretty sure that the death penalty would work in that case, because double-parking is not a crime undertaken by someone whose thought processes have shut down. Double-parkers are making rational calculations, so that the death penalty should have huge deterrent effects.
So from a deterrence standpoint, we should make a capital crime out of double-parking, not murder.
As far as crimes go, econometrics falls somewhere in between double-parking and murder. I’m not sure about the right level to set the deterrent, but it needs to be higher than what we have today.
READER COMMENTS
spencer
Oct 22 2007 at 4:07pm
Isn’t it sort of like Russ being in favor of carrying a concealed weapon?
But if the weapon is concealed and no one knows you have it how is it a deterrent?
Wouldn’t a gun carried in full view provide a stronger deterrent?
Marcus
Oct 22 2007 at 4:10pm
Is it possible to set up some sort of ‘double blind’ system of analyzing statistics?
I’m just a casual observer, but what if the analyst requested a collection of statistics to analyze but instead of one collection he received several, not knowing which one was the true set of statistics?
Just a passing thought.
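Something like this is actually done in particle physics under the name “blind analysis.” A rough sketch of one way to implement Marcus’s version (the function and all names here are mine, purely illustrative): hand the analyst the real dataset shuffled in among decoys whose outcome column has been permuted, with the key held by a third party.

```python
import random
import numpy as np
import pandas as pd

def make_blinded_copies(df, outcome, n_decoys=4, seed=0):
    """Mix the real dataset with decoys whose outcome column is permuted,
    destroying any genuine relationships. Returns the shuffled copies and
    the key (which copy is real), to be held by a third party."""
    rng = np.random.default_rng(seed)
    copies = [("real", df.copy())]
    for i in range(n_decoys):
        decoy = df.copy()
        decoy[outcome] = rng.permutation(decoy[outcome].to_numpy())
        copies.append((f"decoy_{i}", decoy))
    random.Random(seed).shuffle(copies)  # analyst cannot tell which is which
    return [d for _, d in copies], [label for label, _ in copies]
```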
Bruce G Charlton
Oct 22 2007 at 4:30pm
I too have a low opinion of multiple regression analysis – because it fails to control (e.g., by stratified sampling) and instead substitutes post-hoc modelling of (assumed) linear relations.
MR uses the data set being analyzed to generate multiple hypothetical relationships between many variables, then uses these assumed relationships to attempt to remove the effect of confounding variables. This is fundamentally unscientific, IMO, since it involves multiple hypothesis testing on the same dataset.
I feel that MR is usually evidence that the person has not really engaged with the problem at hand – that they are a technician rather than a scientist.
But if you want to see some statistical analysis which strongly suggests that capital punishment probably *does* deter murder, then look at the work of Engram, somebody who understands numerical analysis inside and out.
Engram keeps it simple enough that the assumptions are understandable, he is guided by hypotheses, and he understands the need for control and for cycles of hypothesizing and testing on new data sets:
http://engram-backtalk.blogspot.com/2007/01/prior-posts-on-capital-punishment
Tom S.
Oct 22 2007 at 4:49pm
@Bruce
The link is broken, but it seems you are attacking models which are misspecified in the first place (assuming linearity), and treating that as a shortcoming of all regression models rather than of inappropriate ones.
Alex J.
Oct 22 2007 at 4:55pm
If all guns were openly carried, a potential criminal could see when all victims/witnesses were unarmed. If many guns are carried concealed, the potential criminal faces a statistical chance that any person is armed, even though he can’t see any weapons.
ZAG
Oct 22 2007 at 4:56pm
What econometricians fail to realize is that NO STATISTICAL test measures CAUSATION. There is association, but not causation.
REPEAT STATISTICAL TESTS DO NOT SHOW CAUSATION. THEY SHOW ASSOCIATION. THAT’S IT! NOTHING MORE.
You find that out in an elementary statistics course.
Tom S.
Oct 22 2007 at 5:39pm
@ZAG-
A bit of pragmatism, eh? A panel regression over an event (e.g., legislation) can establish a reason to believe in causation. Or one could adopt Hume’s outlook: causation can never be established at all. But as Hume went on to say, there is nothing erroneous about the pursuit of establishing causation.
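For concreteness, a toy version of what such an event regression looks like (entirely synthetic data, my construction, not from any real study): a difference-in-differences estimate around a legislative change.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# toy panel: 10 "states" over 10 years; half pass a law in year 5
# that lowers the outcome by 1.5
rng = np.random.default_rng(0)
rows = [dict(state=s, year=t, treated=int(s < 5), post=int(t >= 5),
             murder_rate=8 - 1.5 * int(s < 5) * int(t >= 5)
                           + rng.normal(scale=0.5))
        for s in range(10) for t in range(10)]
panel = pd.DataFrame(rows)

# the coefficient on treated:post is the difference-in-differences estimate
did = smf.ols("murder_rate ~ treated + post + treated:post", data=panel).fit()
print(round(did.params["treated:post"], 2))  # recovers roughly -1.5
```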
Jack
Oct 22 2007 at 7:12pm
Arnold,
Re: “I think that good economists rarely change their minds about a topic based on a regression result,” didn’t Larry Summers once argue something like “no one changes their mind as a result of an econometric result”? This further supports your point.
Zag,
Yes, precisely, and that’s why a large chunk of econometrics is about how to establish causality.
Russ (OK he didn’t write here),
My understanding is that such papers are really about “my instruments are better than yours,” and people like James Heckman (see his Minneapolis Fed interview) come down very harshly on such work because the instruments are inevitably weak.
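A toy illustration of the weak-instrument problem (entirely synthetic, my construction, not Heckman's): suppose the true effect of x on y is zero, an unobserved confounder biases OLS toward 0.5, and the instrument z moves x with strength pi. When pi is small, the IV estimate drifts back toward the biased OLS answer.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_sims, true_beta = 200, 2000, 0.0

def median_iv(pi):
    """Median just-identified IV estimate when the instrument has strength pi."""
    ests = []
    for _ in range(n_sims):
        z = rng.normal(size=n)               # the instrument
        u = rng.normal(size=n)               # unobserved confounder
        x = pi * z + u + rng.normal(size=n)  # endogenous regressor
        y = true_beta * x + u + rng.normal(size=n)
        ests.append((z @ y) / (z @ x))       # IV slope estimate
    return np.median(ests)

print("strong instrument (pi=1.00):", round(median_iv(1.00), 2))  # near 0, the truth
print("weak instrument   (pi=0.05):", round(median_iv(0.05), 2))  # near 0.5, the OLS bias
```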
Brian Goff
Oct 23 2007 at 9:24am
Leamer’s piece was hot stuff at Mason back in the early to mid 1980s. I’m not trying to be snotty, but the practice of specification searches is well known (maybe then, certainly now). Referees of stat papers, such as myself, take this into account in ways that do help to offset the effects of the practice and sort out quite a bit of the most egregious cherry-picking.
No doubt, confirmation bias is at work in regression analysis, but this is true of empirical work generally (even lab work). Negative results don’t get published much, partly because of the ease of producing negative results and partly because research gravy trains (often grant funded) get going (such as global warming studies). A variety of disciplines have not really found a suitable solution.
Niclas Berggren
Oct 23 2007 at 9:50am
On confirmation bias (or the like), see Timothy D. Wilson et al. (1993), “Scientists’ Evaluations of Research: The Biasing Effect of the Importance of the Topic,” Psychological Science 4(5): 322-325 and Stephen I. Abramowitz and Beverly Gomes (1975), “Publish or Politic: Referee Bias in Manuscript Review,” Journal of Applied Social Psychology 5(3): 187-200.
Michael Sullivan
Oct 23 2007 at 12:41pm
Wait a minute. This is really basic statistics. When you back test a big data mine for possible correlations, you have to pretty much throw those results out except as a guide to what to test with new, previously unknown data sets. If it’s still significant on fresh data, then you have something. Normally publishable levels of significance are essentially meaningless when arrived at by back testing or data mining.
There are people publishing papers with results based solely on regressions whose specifications were informed by data mining, who don’t explicitly retest with fresh data once they’ve come up with a proposition? And economics researchers take these people seriously? That’s the kind of reasoning that ought to fail undergrad stats, not get into peer-reviewed journals. What in the world?
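The whole pathology fits in a dozen lines (synthetic data, purely illustrative): mine fifty noise series for the best in-sample “predictor,” then retest it on fresh draws.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, k = 100, 50

# an "old" sample and a "fresh" sample; no true relationships anywhere
y_old, X_old = rng.normal(size=n), rng.normal(size=(n, k))
y_new, X_new = rng.normal(size=n), rng.normal(size=(n, k))

# data-mine the old sample: keep the regressor with the smallest p-value
pvals = [stats.pearsonr(X_old[:, j], y_old)[1] for j in range(k)]
best = int(np.argmin(pvals))
print("mined p-value, old data:   ", round(pvals[best], 4))  # usually < 0.05
print("same regressor, fresh data:", round(stats.pearsonr(X_new[:, best], y_new)[1], 4))
# the in-sample 'significance' almost always evaporates out of sample
```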
Jack
Oct 23 2007 at 12:56pm
Michael,
Of course economists are not that clueless. A researcher will develop a theory based on stylized facts before even looking at the data, and only once the model is developed will it be tested against the data.
There is rarely fresh data available in non-experimental settings, but when there is, the old data may be used to design the model (i.e., start by replicating old studies), and the last step is evaluating the model on the fresh data.
Do some researchers cheat? No doubt.
EclectEcon
Nov 20 2007 at 6:04am
Capital punishment for double-parking? What about marginal deterrence?
Aren’t you concerned that double-parkers would then have more incentive to kill those who might turn them in?
John S.
Nov 20 2007 at 7:41am
“My guess is that most murders are committed under circumstances in which the killer is neither in the mood nor in possession of the data to incorporate potential death penalty punishment.”
An alternate theory is that people fall into two broad categories: those who are predisposed to commit murder, and those who are not. Someone who does it once will likely do it again, and thus every execution prevents several future murders by the same person.