There are two conceptually distinct problems with standard estimates of the return to education (see here, here, and here for more).
Problem #1: Ability bias. People with traits the labor market values (intelligence, work ethic, conformity, etc.) tend to get more education. Since employers have some ability to detect these valued traits, people with more education would have earned above-average incomes even if their education were only average. Punchline: Standard estimates overstate the effect of education on worker productivity and income.
Problem #2: Signaling. People with traits the labor market values (intelligence, work ethic,
conformity, etc.) tend to get more education. Since employers have imperfect ability to
detect these valued traits, people with more education earn above-average incomes even if they personally lack these valued traits. Punchline: Standard estimates overstate the effect of education on worker productivity, but not the effect on income.
Neither of these stories enjoys much support from labor economists. They usually just ignore the signaling model – but when they’re being careful they’ll off-handedly admit that “Standard empirical tests can’t distinguish between the human capital and signaling hypotheses.” If you mention ability bias, however, labor economists will quickly point you to a massive literature that supposedly debunks it.
But if you pay close attention, there’s a bizarre omission. Despite their mighty debunking efforts, labor economists almost never test for ability bias in the most obvious way: Measure ability, then re-estimate the return to education after controlling for measured ability. For example, you could measure IQ, then estimate the return to education after controlling for IQ.
When I ask labor economists about this omission, they have a puzzling response: “IQ is a very incomplete measure of ability.” True enough. But the right lesson to draw is that controlling for IQ provides a lower bound on the severity of ability bias. After all, if the estimated return to education falls sharply after controlling for just one measure of ability, imagine how much it might fall after controlling for every dimension of ability.
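To make the logic concrete, here is a minimal simulation (all coefficients invented for illustration, not taken from the NLSY or any published estimate): when ability raises both schooling and wages, naive OLS credits schooling with part of ability's effect, and controlling for ability pulls the estimate back toward the truth.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process: ability raises both schooling
# and wages, so naive OLS attributes some of ability's effect to schooling.
ability = rng.normal(0, 1, n)
schooling = 12 + 2 * ability + rng.normal(0, 2, n)   # years of education
log_wage = 1.0 + 0.05 * schooling + 0.10 * ability + rng.normal(0, 0.3, n)

def ols(y, *cols):
    """Return OLS coefficients (intercept first)."""
    X = np.column_stack((np.ones(len(y)),) + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols(log_wage, schooling)[1]                # omits ability
controlled = ols(log_wage, schooling, ability)[1]  # controls for ability

# The naive estimate overstates the true return (0.05 here);
# the controlled estimate recovers it.
print(f"naive: {naive:.3f}  controlled: {controlled:.3f}")
```

Swapping in a noisy proxy for ability (say, an IQ score equal to ability plus measurement error) closes only part of the gap, which is the sense in which an IQ control yields a lower bound on the bias.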
What happens to the return to education after controlling for IQ? I’ve done the statistics myself on the NLSY, and found that the estimated return to education falls by about 40%. I’ve talked to several other economists of widely varying political persuasions who reached very similar results. Only yesterday, though, did I discover an excellent publication that replicates this 40% figure – and shows it to be extremely robust: McKinley Blackburn and David Neumark’s “Are OLS Estimates of the Return to Education Biased Downward? Another Look” (Review of Economics and Statistics, 1995). Their conclusion:
Thus, in our NLSY data, OLS estimation of the standard log wage equation, including test scores, appears to provide an appropriate estimate of the return to schooling. Such estimates indicate an upward bias of roughly 40% in the usual OLS estimate of the return to schooling (that omits proxies for ability). In contrast to evidence from other recent research using different statistical experiments to purge schooling of its correlation with the wage equation error, our results show that one can address the issues of omitted-ability bias, measurement error, and endogeneity, and still conclude that OLS estimation omitting ability measures overstates the economic return to schooling.
Call me cynical, but I’m confident that if Blackburn and Neumark’s work had come out the other way, defenders of education would loudly include it on their list of reasons to ignore ability bias. Indeed, I wonder if their list would have grown half as long if the obvious test undermined education skepticism instead of supporting it.
To repeat: The straightforward way to test for ability bias is to measure ability, then control for it. If this approach failed to reveal ability bias, it would be reasonable to dismiss it. In practice, though, the straightforward test finds ability bias to be not merely real, but large. I’m not going to let anyone forget it. Expect me to invoke Blackburn-Neumark on a regular basis from now on.
READER COMMENTS
Ed Bosanquet
Jan 2 2012 at 1:58am
Bryan,
In a signal, you have two separate elements: data and noise.
An employer wants to know several things about a prospective employee: ability, diligence, and existing acquired skills. Using an IQ test may reduce the bundled signal of a college degree. (Your post seems to indicate a reduction of 40%.) You seem to be claiming that an IQ test can be substituted for some of the information packaged in a college degree. But if the IQ test can be pushed up by training and it is used widely, then prospective employees could simply fake the IQ test.
I would assume faking the signal in an IQ test is easier than faking a degree and decent grades. So if you try to use the IQ test to solve the matching problem, at first you would make gains, since it is a cheaper, more effective method of finding quality applicants. However, once it is widely used, the quality of the data in the signal would degrade.
After reading your posts on this topic for a while now, I’m still not sure how you improve the matching problem. Metrics often are useful until they start getting used to make decisions. Once they are used to make decisions, people optimize to the metric and the metric loses value. A college education may have lost some of its value as a metric, but I’m not convinced there is a better alternative.
Thank you,
Ed
Steve Miller
Jan 2 2012 at 7:02am
I haven’t looked at it in more than a month, but David Card acknowledges (and tries to explain) the divide between direct and indirect attempts to test for ability bias in his Handbook of Labor Economics chapter on the returns to education. He argues that the best available evidence comes from identical twin studies, and that standard OLS estimates of education’s returns are ability-biased upward by 10%. He also suggests that IV estimates of education’s returns probably suffer from greater ability bias.
My own hunch is that 40% may be high, but ability bias explains about a third of education’s OLS-estimated returns. It makes sense when you consider how asymmetric the returns to education are at different IQ quantiles. Heckman has been doing work on this with recent doctoral students at Chicago.
But let’s take Card’s view as the absolute truth, rather than a lower bound. 10 percent is still 10 percent. That suggests a good chunk of marginal-ability students are making a bad investment, and on the higher ed side many resources are being misallocated toward them.
Finally, Ed: you are missing the distinction between ability bias and signaling.
Daniel Kuehn
Jan 2 2012 at 8:42am
I think this is basically good, but you need to be careful with the ability scores in the NLSY.
First, two psychologists I know don’t think the ASVAB is very good. I’m sort of with you on this one. For our purposes, “not ideal” is still better than nothing.
More importantly, there’s good reason to believe this measure is endogenous itself. The ASVAB is more like the SAT or the GRE than an IQ test, in that it tests a lot of knowledge-based ability. Since it is administered when the NLSY sample is between the ages of 12 and 16, they’ve already had a lot of schooling (indeed, some of the 16 year olds may have already had ALL their schooling at that point). Your ability measure is probably also a measure of education.
I have more thoughts on this post here: http://factsandotherstubbornthings.blogspot.com/2012/01/ability-bias-and-education.html
I am sympathetic to what you’re saying. In one paper I have published, and in one paper that I’m submitting to the Review of Black Political Economy this week or next, we do exactly what you’re suggesting here. But I also think there are pretty good reasons why you don’t see this as much.
Jim
Jan 2 2012 at 8:43am
On the Economic Complexity Index website, I notice in the comparison of Thailand and Ghana that heavy investment in education does not stimulate economic growth, while nurturing economic complexity does.
Analysis of current educational systems suffers from the absence of alternatives to compare against. I greatly fear that government regulation and the total subjugation of supply and demand (inhibiting almost all innovative trials) mean that our educational system is no longer guided by effectiveness in any way. The overriding clue must be that its cost is still rising in a world where everything else is being delivered faster and more inexpensively.
Also, I find complexity and PSST have much in common, but perhaps that is because of my predilection to eschew macro altogether in preference to complexity and network theory.
Daniel Kuehn
Jan 2 2012 at 8:47am
In the Review of Black Political Economy paper we’re looking at the differential benefit of a high school diploma for white and black youth. You’ll be happy to know that we talk a lot about how our results are likely produced by the differential role of signaling for these youth. We take a lot of these insights from this recent RESTAT paper: http://www.mitpressjournals.org/doi/abs/10.1162/REST_a_00063
Peter
Jan 2 2012 at 11:18am
Hi, Interesting post.
IQ is affected by education as well (there’s recent work on this using data from Scandinavia and compulsory schooling laws); hence controlling for IQ might be controlling for a mediating variable.
Further, I don’t think a regression without using an IV or at least a structural model to account for selection is that believable. Even the latter methods have struggled to address this problem in terms of application…
Steve Miller
Jan 2 2012 at 11:24am
Okay, I spent the morning digging into the Blackburn-Neumark article. They now have me convinced that ability bias is even higher than I previously thought. They come up with a 40% upward ability bias for standard OLS estimates of education’s returns. They bend over backward to test the sensitivity of that estimate, and consider measurement error and half a dozen other potential objections. Their finding is very robust.
One reason is that the ASVAB measures multiple kinds of ability beyond IQ (which is measured in the subset known as the AFQT). If anything, it’s a more complete measure of ability than the SAT, GRE, etc. Controlling for IQ alone gives, as Bryan says, a *lower* bound for true ability bias in standard estimates. That’s a very impressive article. No wonder it hit ReStat.
Keith
Jan 2 2012 at 12:50pm
Personal story.
Father: Phd in management, professor, dean, consultant, board member and chairman for various firms, retired very successful with substantial assets.
Uncle (Father’s brother): married day after graduating high school, went to work at department store, mid thirties opened own clothing store, over time opened several others, sold stores and retired very successful with substantial assets.
Personal observation: success and income are predominantly driven by ability and drive. Education level is highly correlated with ability and drive, but …
D
Jan 2 2012 at 6:40pm
Bryan is about to knock it out of the park again with this book.
Bostonian
Jan 2 2012 at 9:28pm
College to a large extent *is* an intelligence test. Since IQ tests measure intelligence with error, one could get at least a slightly better measure of someone’s intelligence using his college grades and IQ score than with the IQ score alone.
If you have large group of people who scored about 100 on an IQ test upon high school graduation, the people who went on to get a B.A. are on average more intelligent than the ones who flunked out of college.
In a multiple regression using an IQ score and years of school completed as predictors, some of the weight given to the schooling predictor is really due to the effects of intelligence.
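Bostonian's point can be illustrated with a toy regression (all numbers invented for illustration). In the simulation below, wages depend only on true intelligence, the IQ test measures intelligence with error, and schooling is driven by true intelligence; yet schooling picks up a positive coefficient because it proxies the part of intelligence the noisy test misses.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

iq_true = rng.normal(100, 15, n)
iq_test = iq_true + rng.normal(0, 10, n)   # IQ measured with error
# Schooling tracks true intelligence (plus luck); no causal wage effect here
schooling = 10 + 0.1 * (iq_true - 100) + rng.normal(0, 1.5, n)
# Wages depend ONLY on true intelligence in this toy world
log_wage = 1.0 + 0.02 * (iq_true - 100) + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), iq_test, schooling])
b = np.linalg.lstsq(X, log_wage, rcond=None)[0]

# The schooling coefficient b[2] comes out positive even though schooling
# has zero causal effect: it absorbs the intelligence the noisy test missed.
print(f"iq_test coef: {b[1]:.4f}  schooling coef: {b[2]:.4f}")
```

This is exactly the regression-weight transfer Bostonian describes: with a noisy IQ score in the model, part of intelligence's effect loads onto the schooling predictor.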
Eric
Jan 3 2012 at 3:22pm
Mr. Caplan,
Though I am sympathetic to your general views on education, there is an econometric fallacy hidden in the analysis in this post.
The fallacy is this: Suppose we are running OLS and are omitting a variable. If this variable is not orthogonal to other regressors then the OLS estimates will be biased.
Now suppose that we obtain a proxy for the missing variable and include it in our regression model. Do the new OLS coefficients on the variables of interest have to move closer to their “true” values? No. The reason is that the bias in a regression coefficient is a ratio of covariance terms, so even though the proxy may plausibly reduce both the numerator and the denominator, the ratio may increase in magnitude.
The main problem is that the denominator goes from being Variance(Schooling) to Variance(Schooling orthogonal to IQ). This second number might be very small (probably very few people with IQs under 85 get more than a high school diploma), so the small denominator can cause the bias term to explode.
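Eric's algebra can be written out explicitly. Here \(\gamma\) is ability's true wage effect, \(S\) is schooling, \(A\) is ability, and tildes denote residuals after partialling out the proxy \(P\) (this notation is supplied here for exposition, not taken from the comment):

```latex
% Omitting ability A from the wage regression:
\operatorname{bias}\bigl(\hat\beta_S\bigr)
  = \gamma \, \frac{\operatorname{Cov}(S, A)}{\operatorname{Var}(S)}

% Including a proxy P (e.g. an IQ score):
\operatorname{bias}\bigl(\hat\beta_S\bigr)
  = \gamma \, \frac{\operatorname{Cov}(\tilde S, \tilde A)}{\operatorname{Var}(\tilde S)},
  \qquad \tilde X \equiv X - \operatorname{proj}(X \mid P)
```

The proxy shrinks both the covariance in the numerator and the variance in the denominator; if the denominator shrinks proportionally more (as in Eric's IQ-under-85 example, where little schooling variation remains once IQ is partialled out), the bias ratio can grow rather than shrink.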