Christopher Avery, Mark Glickman, Caroline Hoxby, and Andrew Metrick use a chess-rating methodology to rank colleges. If a student admitted to both Harvard and Yale selects Harvard, that counts as a “win” for Harvard. The schools that rank at the top under this approach are Harvard, Yale, Stanford, Caltech, and MIT. The authors conclude:

If a revealed preference ranking constructed using our procedure were used in place of manipulable indicators like the crude admissions rate and crude matriculation rate, much of the pressure on colleges to manipulate admissions would be relieved. In addition, students and parents would be informed by significantly more accurate measures of revealed preference. We close by reminding readers that measures of revealed preference are just that: measures of desirability based on students and families making college choices. They do not necessarily correspond to educational quality.

Note that if students were to make choices based on this sort of ranking system, the rankings would become self-perpetuating. The best students would see that Harvard is the highest-rated school and pick Harvard, and those very choices would in turn keep Harvard at the top of the rankings. It becomes perfectly circular.
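To make the mechanics concrete, here is a minimal Elo-style sketch of the chess-rating idea in Python. The authors’ actual statistical model is more elaborate than this; the schools, starting ratings, K-factor, and matchup data below are illustrative assumptions, not figures from the paper. Each decision by a student admitted to both schools is treated as a game: the chosen school’s rating rises, and the spurned school’s rating falls.

```python
# Minimal Elo-style sketch of a "revealed preference" college ranking.
# Illustrative only: school list, initial ratings, K-factor, and the
# matchup data are assumptions, not values from the Avery et al. paper.

def expected_score(r_winner, r_loser):
    """Standard Elo expected score for the first (winning) school."""
    return 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))

def update(ratings, winner, loser, k=32):
    """A student admitted to both schools chose `winner`: a 'win'."""
    exp = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += k * (1.0 - exp)   # chosen school gains
    ratings[loser]  -= k * (1.0 - exp)   # spurned school loses

# Hypothetical head-to-head matriculation decisions.
ratings = {"Harvard": 1500.0, "Yale": 1500.0, "Stanford": 1500.0}
matchups = [("Harvard", "Yale"), ("Harvard", "Stanford"),
            ("Yale", "Stanford"), ("Harvard", "Yale")]

for winner, loser in matchups:
    update(ratings, winner, loser)

for school, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{school}: {rating:.0f}")
```

Notice how the circularity worry shows up directly in the sketch: the ratings are nothing more than an aggregation of past choices, so any choices steered by the ratings feed straight back into them.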

I like the chess-rating approach. However, I would prefer to see it applied to student outcomes (e.g., performance on the same introductory economics exam) rather than to students’ choice of school.

For Discussion. Will we ever reach a point where college quality is measured in terms of value added?