Tyler Cowen rejoins the debate on the returns-to-schooling literature:
What’s striking about the work surveyed by Card is how many different methods are used and how consistent their results are. You can knock down any one of them (“are identical twins really identical?”, etc.), but at the end of the day which are the pieces — using natural or field experiments — standing on the other side of the scale?
My guess is that “the cutting-room floor” is the answer to the question. The first time you run a regression, you usually get coefficients that you don’t like. You then proceed to “fix” the problem by correcting errors in the data, improving the specification, etc.
Next, suppose that you stick with the result that there is a small effect of schooling on earnings. Are you able to get it published, or does the journal editor leave it on the cutting-room floor?
Finally, suppose, like James Heckman, you are able to publish a result showing no large effect from schooling. Does Card include it in his survey, or does he leave it on the cutting-room floor?
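The selection mechanism is easy to demonstrate. Here is a minimal simulation sketch (all parameter values are invented for illustration, not taken from the literature): if only statistically significant coefficients survive to publication, the published average overstates the true effect.

```python
# Minimal sketch of the "cutting-room floor": keep only significant estimates
# and the published average overstates the truth. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
true_return = 0.02            # assumed true effect of a year of schooling on log wages
n_studies, n_obs = 1000, 200

published = []
for _ in range(n_studies):
    school = rng.normal(13, 2, n_obs)                 # years of schooling
    log_wage = true_return * school + rng.normal(0, 0.5, n_obs)
    x = school - school.mean()
    beta = (x * log_wage).sum() / (x * x).sum()       # OLS slope
    resid = log_wage - log_wage.mean() - beta * x
    se = np.sqrt(resid @ resid / (n_obs - 2) / (x * x).sum())
    if beta / se > 1.96:                              # the editor's filter
        published.append(beta)

print(f"true effect: {true_return}")
print(f"mean published estimate: {np.mean(published):.3f} "
      f"({len(published)} of {n_studies} studies survive)")
```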
Note that Card’s survey points out that Angrist and Krueger appear to have backed off their use of the draft lottery as a natural experiment, because
In fact, the differences in education across groups of men with different lottery numbers are not statistically significant. Thus, the IV estimates are subject to the weak instruments critique of Bound et al. (1995), and are essentially uninformative about the causal effect of education.
I would add that it would have been interesting to run the same study using women, who of course were not eligible for the draft. If using the birthdate-based draft number for women produces the same “effect” as it does for men, you know that something is fishy.
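To see why such estimates are “essentially uninformative,” here is a minimal simulation sketch (all numbers invented): an instrument with no first stage, which is exactly what a draft number would be for women, yields IV estimates that bounce around wildly.

```python
# Minimal sketch of the weak-instrument problem: an instrument with no first
# stage (a draft number for women) gives wild, uninformative IV estimates.
# All parameter values are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 5000

def iv_estimate():
    ability = rng.normal(0, 1, n)                 # unobserved confounder
    z = rng.integers(1, 366, n)                   # "draft number", unrelated to schooling
    school = 12 + 2 * ability + rng.normal(0, 1, n)
    log_wage = 0.06 * school + 0.30 * ability + rng.normal(0, 0.5, n)
    zc = z - z.mean()
    # Wald/IV ratio: cov(z, y) / cov(z, x); the denominator is pure noise here
    return (zc @ (log_wage - log_wage.mean())) / (zc @ (school - school.mean()))

print(["%.2f" % iv_estimate() for _ in range(10)])  # huge scatter, no information
```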
My guess is that if you add up the cost of all the studies in the education/earnings literature, it comes to far more than it would have cost to run a college scholarship lottery among a sample of students and observe the differences in outcomes. To study the relevant margin, choose the sample from students whose grades and SAT scores would barely gain them admission to second-tier state schools. Randomly give half of them full-ride scholarships and the other half nothing. Observe their years of schooling and their subsequent earnings. Base your conclusion on those results.
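Analyzing such a lottery would then be trivial. A hypothetical sketch of the comparison (the data below are simulated placeholders):

```python
# Hypothetical analysis of the scholarship lottery: with random assignment,
# a simple difference in means is an unbiased estimate of the causal effect.
# The data here are simulated placeholders.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
scholarship = rng.integers(0, 2, n).astype(bool)      # coin-flip assignment

# Placeholder outcomes; in the real study these come from follow-up surveys.
years_school = 12 + 2 * scholarship + rng.normal(0, 1.5, n)
log_earnings = 10 + 0.05 * years_school + rng.normal(0, 0.6, n)

def diff_in_means(y, treated):
    d = y[treated].mean() - y[~treated].mean()
    se = np.sqrt(y[treated].var(ddof=1) / treated.sum()
                 + y[~treated].var(ddof=1) / (~treated).sum())
    return d, se

for label, y in [("years of schooling", years_school),
                 ("log earnings", log_earnings)]:
    d, se = diff_in_means(y, scholarship)
    print(f"{label}: difference = {d:.3f} (se {se:.3f})")
```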
READER COMMENTS
stephen
Jun 28 2011 at 9:03am
question:
When Tyler refers to “different methods,” does he mean different statistical methods/models on the same data, or the same methods/models on independent sets of data? Or something else entirely?
Eric
Jun 28 2011 at 9:37am
I agree wholeheartedly with the critique you raised. It applies generally to nearly every area of the economics literature that relies heavily on econometrics. As a trained physicist who now works in a department with economists, I am struck by what a physicist sees that economists are blind to.
In addition to the publication (and pre-publication) bias noted above, there is the economic-vs.-statistical-significance issue that McCloskey has been pushing forever, and Scott Armstrong’s critique that econometric specifications have very little predictive value on new datasets that weren’t available for the original study.
Physics has the same problems, particularly that of correcting errors until the result seems reasonable and then stopping. But these errors are mitigated somewhat by the fact that physics can run controlled experiments: even with measurement error, you can get p-values like 10^-130 (i.e., won’t happen by chance in the history of the universe), and you can see the results with your own eyes. It is also mitigated by the requirement that a result be replicated in multiple labs, multiple independent times (meaning completely independent data sets), before it is accepted as probably true. My problem isn’t that published econometric studies aren’t evidence, but that, given these issues, they are *very* weak evidence, and economists seem to be 95% certain of things they should be 52% certain of. Cowen here, for example, is way too certain that the conclusions of the literature are true.
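A minimal sketch of McCloskey’s point, with invented numbers: make the sample big enough and an economically meaningless effect becomes “highly significant.”

```python
# Minimal sketch of statistical vs. economic significance: an effect of half
# a percent of a standard deviation -- economically negligible -- is "highly
# significant" in a large sample. All numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 5_000_000
treated = rng.integers(0, 2, n).astype(bool)
outcome = rng.normal(0, 1, n) + 0.005 * treated   # effect: 0.005 sd

t, p = stats.ttest_ind(outcome[treated], outcome[~treated])
print(f"t = {t:.1f}, p = {p:.1e}")                # statistically significant
print("economically: a 0.005-sd shift that no one would ever notice")
```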
B
Jun 28 2011 at 10:37am
Econometrics done correctly is boring, extremely lengthy and mostly inconclusive. That doesn’t mean the results aren’t informative if your audience has time to hear the whole story. But you need to do a lot of cutting to tell a concise, exciting story.
AS
Jun 28 2011 at 1:17pm
Your criticisms are good, but they apply to pretty much all economic literature that relies on econometric methods. If you are going to use that argument to discount studies that show a real effect from education, you should use it to discount studies in every other subfield of economics with similar methods and results. And if you aren’t willing to throw out all those other subfields, then you should be willing to (at least partially) accept the results of this literature.
ThomasL
Jun 28 2011 at 1:39pm
It applies to more than econometrics and sociology. There are important parallels in medicine and climate as well.
“Look at how consistent their results are” was also a main rallying point around climate change research. No matter what you believe regarding global warming, the Climategate scandal showed that the consistency of their results was not purely coincidental. There was a lot going on in data and regression parameter selection to make sure the results produced by each member of the “Hockey Team” (their own phrase) were always on message.
If you are concerned with advocacy, that makes sense, because too subtle a message is bad news copy. If you are concerned with truth, it is quite another story.
And that is only intentional bias… the effects of unintentional bias can be even greater.
Simon
Jun 28 2011 at 2:06pm
Here is one paper testing signaling vs. productivity that finds no return to high school graduation, and isn’t on the cutting room floor:
http://www.irs.princeton.edu/pubs/pdfs/557.pdf
This is basically an ideal test of the signaling hypothesis — it compares students who take exit exams to earn a high school diploma and barely pass vs. barely fail. It finds no effect of receipt of a high school diploma, conditional on academic skills. The signaling model would predict a very different result. What is your explanation?
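A minimal sketch of the comparison (simulated data under the null of no diploma effect; the paper’s actual specification is more careful):

```python
# Sketch of the exit-exam design: compare earnings just above vs. just below
# the passing score. Simulated data under the null of no diploma effect.
import numpy as np

rng = np.random.default_rng(4)
n = 20000
score = rng.normal(60, 15, n)           # exam score; passing cutoff at 60
diploma = score >= 60
# Earnings depend on skill (proxied by the score) but not on the diploma.
log_earnings = 9 + 0.01 * score + rng.normal(0, 0.4, n)

window = np.abs(score - 60) < 2         # students who barely passed or failed
gap = (log_earnings[window & diploma].mean()
       - log_earnings[window & ~diploma].mean())
print(f"earnings gap at the cutoff: {gap:.3f} log points")
# Signaling predicts a discrete jump here; the small residual is just the
# skill slope inside the window, which the paper nets out.
```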
Arnold Kling
Jun 28 2011 at 4:20pm
AS,
I am willing to throw out a lot of econometric results. I am a broad-based skeptic, and I try to be just as doubtful of studies that support my own biases.