“In teaching, you cannot see the fruit of a day’s work. It is invisible and remains so, maybe for twenty years.” ~ Jacques Barzun.
Athabasca University, like a number of other universities, uses student opinion surveys to evaluate teaching competence. All academics are vulnerable to these kinds of evaluations, but those who are part-time and/or on contract are especially so, since their livelihood depends largely on getting good results on these instruments. Those on contract usually do not have the luxury of also being evaluated on their research or community service.
But, considering that student opinion surveys are widely used in academic institutions that are supposedly bastions of academic research, it is noteworthy that there is a distinct lack of research supporting the validity of their use. Thus, rather than advancing a research-based argument to support their use, their proponents tend to put forward mere assertions or simply appeal to naked pragmatism, e.g., “they keep the students happy”, “it’s easy to crunch the results”.
In fact, the vast majority of the research that has been done on student opinion surveys opposes rather than supports their use. See, for example, Braga et al.’s article in Economics of Education Review, August 2014 (http://www.sciencedirect.com/science/article/pii/S0272775714000417). Based on the findings of such studies, numerous articles have been written critiquing their use. Some of the main arguments raised against using student opinion surveys to evaluate teaching include:
the surveys focus on students’ emotional disposition toward the teaching in the short term, long before the full impact of the teaching is actually known;
using a single method to evaluate teaching directly contradicts the well-established principle that good evaluation requires a number of methods;
teaching effectiveness cannot be atomized into a checklist of specific behaviours;
the response rate is low and even lower with online evaluations, skewing the validity of the results;
in general, only very happy or very unhappy students are motivated to fill out the surveys;
certain items that are commonly evaluated simply cannot be evaluated by students, e.g., the instructor’s course content knowledge;
“averaging” the results is statistical nonsense because the choices (1, 2, 3, 4; not satisfied, somewhat satisfied, satisfied, fully satisfied) are ordinal (ranked) in nature rather than interval in nature (an equal distance apart);
results are influenced by variations in the mere quantity of student-instructor interaction;
the very desirable teaching practice of challenging students may lead to lower evaluations;
gender bias exists against female instructors;
pressure is placed on instructors to do what is required to get a “good rating”;
the surveys provide few useful insights to experienced instructors;
there is no accountability for personal or even slanderous remarks;
in the end, the opinion surveys are nothing more than a popularity contest.
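The point about ordinal scales can be illustrated with a minimal Python sketch (the response numbers here are invented for illustration): treating the 1–4 satisfaction codes as interval numbers lets two very different classes produce the same “average”, while ordinal summaries such as the median and the raw distribution reveal the difference.

```python
from statistics import mean, median
from collections import Counter

# Hypothetical responses on a 4-point scale:
# 1 = not satisfied ... 4 = fully satisfied
polarized = [1, 1, 4, 4, 4]   # students sharply split
middling  = [2, 3, 3, 3, 3]   # students mildly satisfied

# Treating the codes as interval numbers, both classes "average" 2.8,
# even though the underlying opinions are very different.
print(mean(polarized), mean(middling))      # 2.8 2.8

# Ordinal summaries distinguish them: medians differ...
print(median(polarized), median(middling))  # 4 3

# ...and the full distributions differ even more starkly.
print(Counter(polarized))  # Counter({4: 3, 1: 2})
print(Counter(middling))   # Counter({3: 4, 2: 1})
```

An administrator comparing only the two identical means would see no difference between these classes; that is precisely why averaging ordinal responses can mislead.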
The above criticisms (and others) also highlight the gross unfairness of using the evaluations for what might be called disciplinary purposes. If the opinion surveys themselves are flawed, then they cannot be validly used to assess teaching competence. Further, how can that flawed assessment of competence then be validly used to “inform” such matters as which contract academic gets a letter of reprimand or is assigned to teach or reteach a particular course? There is a saying in law about the “fruit of the poisoned tree”: if the evidence is tainted, then anything gained from it is tainted also. The same principle should apply to any use of the results of student opinion surveys.
(Recommended Further Reading: “Do student evaluations measure teaching effectiveness?” Philip Stark, University of California Berkeley Professor of Statistics, October 2013. http://blogs.berkeley.edu/2013/10/14/do-student-evaluations-measure-teaching-effectiveness/)