Want fairer course evaluations for female instructors? Write better questions.
It’s the end of the semester, and students’ course evaluations are trickling in. More often than not, students fill out evaluations begrudgingly, only after being urged by their professors, who understand that evaluations can be the deciding factor when it comes to tenure, awards, and salary negotiations.
Course evaluations are the bedrock of any college or university that aspires to excellence. For one, they give students agency and involve them as stakeholders in the future of their education system. And in an ideal world, course evaluations would be exactly that: an impartial assessment of teaching excellence.
Recent research has shown that female instructors consistently receive lower scores on course evaluations. And before anyone tries to tell me that women are inherently less [pick your poison: assertive, organized, competent, etc.], one study had students rate the quality of assistant instructors in an online course, where the students never met the instructor in person but assumed their gender based on their name and pronouns. In the study, the same instructor taught two different classes under two different apparent genders, meaning that the same person was scored higher when they were believed to be a man, and lower when they were believed to be a woman. Other studies have shown similar results for racialized instructors.
So what’s a college or university to do?
Doing away with course evaluations altogether sounds like throwing out the baby with the bathwater – course evaluations are efficient, cost-effective, and accessible for students. The solution, perhaps, is to structure course evaluations to help students avoid their own biases. Universities and colleges can do that by being thoughtful about what questions they ask students on course evaluations, and how those questions are worded.
As it turns out, the course evaluation favourites – like having students rate how strongly they agree with the statement “overall, this person is an effective teacher” – are the ones to watch out for.
In Inside Higher Ed, Joey Sprague argues that questions that ask students to assess an instructor’s entire performance are the most likely to activate bias. They’re vague enough that they encourage an instinctive response – instinct that is fertilized by unconscious bias – rather than forcing students to reflect on the particular behaviours and qualities that make an instructor effective.
Questions that are unspecific about what metric the student should use to assess a teaching quality also become hotbeds of bias. “The standards that people apply shift depending on the target’s gender and race,” explains Sprague, meaning that when answering questions about how “responsive” a professor is, students hold men and women to different standards of responsiveness.
The bottom line?
Universities need to design course evaluation questions that measure concrete behaviours. That means specifying a time frame (“This instructor returned graded assignments within two weeks”) and encouraging students to reflect on particular interactions with their instructor that inform their assessment (“This instructor was always in their office during their office hours”).
The outcome is students who receive a better education, and faculty who are fairly recognized for their excellence.
Looking for a painless way to automate course evaluations? Fiscot can help.
Our S.O.F.I.A. software includes a built-in student dashboard where students can track their courses, their instructors, and their evaluations of instructor performance.
Image source: Wake Forest University School of Law – Under Creative Commons license