At the end of every semester, GMU students evaluate their courses on a scale of 1-5. As I’ve discussed before, 5 (“excellent”) is the standard response. So I was shocked this morning to see that students at GMU have suddenly reduced their average “overall rating of this course.” The university average used to be about 4.33, with a median of 5; now it is 4.13, with a median of 4. Since there are 60,000-75,000 evaluations handed in per semester, the change is way too large to attribute to chance.
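A rough sketch of the arithmetic behind that claim (purely illustrative: it assumes a standard deviation of about 1 point on the 1-5 scale and 60,000 responses per semester, neither of which comes from the actual evaluation data):

    # Back-of-envelope: could a 0.20-point drop in the semester-wide mean be chance?
    import math

    n = 60_000          # assumed evaluations per semester (low end of 60,000-75,000)
    sigma = 1.0         # assumed standard deviation of individual 1-5 ratings
    drop = 4.33 - 4.13  # observed change in the average

    # Standard error of the difference between two independent semester means
    se_diff = math.sqrt(2 * sigma**2 / n)
    print(round(se_diff, 4))      # about 0.0058
    print(round(drop / se_diff))  # the drop is roughly 35 standard errors

Even with generous assumptions about the spread of individual ratings, a 0.2-point shift across tens of thousands of responses sits dozens of standard errors from zero, far beyond anything sampling noise could produce.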
We could spin all kinds of theories to explain the change. But the simplest explanation turns out to be a question-wording effect. The evaluation form recently raised the number of questions from six to sixteen. The final question on both the old and new forms asks students for their “overall rating of this course,” but the new form adds a question immediately before it asking for an “overall rating of the teaching.”
Funny thing: the average overall rating of the teaching (4.34) is almost exactly equal to the old average overall rating of the course (4.33).
Here’s how things look to me:
Students want to give their professors an overall thumbs up. They want to say that the average prof is above average. But in the old days, their only way to do so was to say the course was great. Now that they can distinguish between the quality of the teaching and the quality of the course, they say that the teaching was great.
I’ve often argued with Arnold about the usefulness of survey research. Even though I think Arnold’s too pessimistic, GMU student evaluations do highlight an important way that surveys come up short. If you really want to know how students rate a teacher or a course, the best measure is probably just attendance.
READER COMMENTS
David Robinson
Feb 5 2008 at 2:22pm
The problem with measuring popularity by attendance is controlling for the time of day. Early morning classes are, in my experience, usually less well attended than later classes. My favorite class last semester was the one I attended the least, because it was the earliest in the day.
But that’s just one problem. Putting course lecture slides, notes, or video online will drive down attendance, even though adding those features will probably make the course MORE popular. And classes have ways to ensure attendance: they can have random quizzes occasionally, or even at every lecture. Difficult classes will force students to attend more often, even if they like the class less because of the difficulty.
The fact that it’s a poor indicator is nothing compared to the distortionary effect it would have if it were used as an evaluation metric. Professors would be encouraged to withhold notes, slides, and videos, to add quizzes, and to take other such measures. That would be fine if raising attendance were your goal, but it’s not: you’re citing it as an indicator.
Dr. T
Feb 5 2008 at 8:23pm
My student evaluation experience mostly comes from teaching medical school. In medical school, student evaluations of faculty correlate strongly with two factors: how entertaining the lecturer was and (most importantly) how easy the exam questions were. Studies of medical students’ evaluations of courses and faculty showed a statistically significant negative correlation with performance on the National Board exams. That is, students did worse in areas taught by higher-rated faculty and better in areas taught by lower-rated faculty. I am proud to say that I had the second-lowest evaluation score, but, on my area of the National Board exam, the students beat the national average (which they did in only one other subject). Most College of Medicine deans recognize the poor correlation, and faculty who challenged the students and got good results weren’t penalized for the poor evaluations.
With this background, I have difficulty understanding why student evaluations are so important in economics courses.
burger flipper
Feb 5 2008 at 9:35pm
Also of interest are sources where students are informing other students rather than the institution, such as ratemyprofessor.com. There, Caplan does fine and Kling suffers, maybe because of tough grading (reported by pickaprof, where neither of you is reviewed).
The most salient point from Supercrunchers, for me, was the poll result showing that 96% of professors assessed themselves as above average. Seems about right.
Niccolo Adami
Feb 6 2008 at 12:40am
In my experience as a student, I find a difference between class enjoyability and class quality. Whereas a class must be enjoyable to possess a high level of quality (that is, learning value), it doesn’t need a high level of quality to be enjoyable.
I don’t know if that’s taken into account though.
Also, maybe the best indication of quality would be class interaction. I don’t know if that’s measurable or even quantifiable, but I think it goes without saying that people tend to interact with that which they find worth their while. If you could measure that, I think you would find it to be a better measurement.
As for the results themselves, my hunch is that it might even be a difference in the students. Maybe this year’s students are less receptive than last year’s students? If all other elements remain the same, what else could it be?
It seems like a stretch, however, to suggest that so many students would enter GMU with a poorer attitude.
Michael Sullivan
Feb 6 2008 at 11:28am
I like your 2005 suggestion to add specific superlatives at 6 and 7. Also, I’d like to recommend something that was done when I was at the University of Rochester in the late 80s, which is to publish not just the average but the raw data. That way I could tell whether a professor’s 4.5 was due to a large number of people giving 3s and 4s, or to one or two 1s against everybody else giving 5s. They also published a few representative comments on each course/prof, and I found those far more enlightening than the ratings.
It would be even more important to publish the raw data with your suggestion. Seeing that a prof got lots of 6s and a few 7s along with some 1s and 2s would make me far more eager to take their class than a similar average with no outliers would.