At the end of every semester, GMU students evaluate their courses on a scale of 1 to 5. As I’ve discussed before, 5 (“excellent”) is the standard response. So I was shocked this morning to see that GMU students have suddenly lowered their average “overall rating of this course.” The university average used to be about 4.33, with a median of 5; now it is 4.13, with a median of 4. Since 60,000-75,000 evaluations are handed in per semester, the change is way too large to attribute to chance.
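A quick back-of-the-envelope check shows just how far outside chance this is. Here’s a minimal Python sketch; the standard deviation of roughly 1 point on the 1-5 scale is my assumption (GMU doesn’t publish it), and I use the low end of the evaluation count:

```python
import math

# Assumed numbers (mine, not GMU's): sd of about 1 on the 1-5 scale,
# and the low end of the 60,000-75,000 evaluations per semester.
n = 60_000
sd = 1.0
old_mean, new_mean = 4.33, 4.13

# Standard error of the difference between two independent sample means
se_diff = math.sqrt(sd**2 / n + sd**2 / n)
z = (old_mean - new_mean) / se_diff

print(f"standard error of the difference: {se_diff:.4f}")
print(f"z-score of the 0.20 drop: {z:.0f}")
```

Under those assumptions the 0.20 drop is around 35 standard errors from zero, so even a much bigger true standard deviation wouldn’t rescue the chance explanation.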

We could spin all kinds of theories to explain the change. But the simplest explanation turns out to be a question-wording effect. The evaluation form recently grew from six questions to sixteen. On both the old and new forms, the final question asked students for their “overall rating of this course,” but the new form added a question right before it about the “overall rating of the teaching.”

Funny thing: If you look at the average overall teaching rating (4.34), it is almost exactly equal to the old overall course rating (4.33).

Here’s how things look to me:

Students want to give their professors an overall thumbs up. They want to say that the average prof is above average. But in the old days, their only way to do so was to say the course was great. Now that they can distinguish between the quality of the teaching and the quality of the course, they say that the teaching was great.

I’ve often argued with Arnold about the usefulness of survey research. Even though I think Arnold’s too pessimistic, GMU student evaluations do highlight an important way that surveys come up short. If you really want to know how students rate a teacher or a course, the best measure is probably just attendance.