Incentives for Course Evaluations
By Cindy Nebel
Over the years, I have heard a lot of different opinions about course evaluations. Here are a few of the (likely misquoted) gems:
Student 1: I don’t fill out course evaluations because no one reads them anyway.
Student 2: Yessssss, time to roast Professor Y!
Dean: How do you get all of your students to fill out course evaluations? I want to create a policy that all the faculty have to abide by so we can get 100% participation.
Colleague 1: I offer extra credit and give students time in class to fill out course evaluations so that they will all participate.
Colleague 2: I’m not offering any incentives for course evaluations because I don’t want the grumpy students to fill them out and we all know the good students are the ones who will actually do it!
Colleague 3: I have to bake cookies tonight because students are filling out course evaluations in class tomorrow.
Around this time last year, I wrote a blog summarizing the pros and cons of course evaluations. I concluded that course evaluations probably aren’t going anywhere anytime soon and that we should all try to embrace them as formative assessments for our own teaching. Today, I want to follow up on this theme. Let’s assume that course evaluations are here to stay. Which of the colleagues above will end up with the best evaluations (all other things being equal)? Should you offer extra credit (a *gasp* bribe) for their completion? Do only the good students voluntarily fill out evaluations? Thankfully, we can look to the research literature for answers to some of these questions.
Should you incentivize evaluations?
In order to understand why this is an issue in higher education, you may need some background information. There has been a big push in the past decade or so to move all course evaluations to online systems (1). There are lots of benefits to doing this, including ease of access and reporting, as well as the creation of national averages and statistics. At my university, we can now compare our scores to those at comparable universities and in comparable courses (e.g., undergraduate courses of a similar size), which creates possible benchmarks. (Aside: the averages themselves arguably make poor benchmarks, since it is not mathematically possible for everyone to be above average.)
However, the huge downside to online evaluations is getting students to actually complete them. Several research studies have found much lower response rates for online evaluations than for paper-and-pencil forms that must be filled out during class (2,3). Even though Colleague 2 might not want the “grumpy” students to complete the evaluations, capturing an accurate picture of student impressions requires that those students’ opinions be included.
Many educators have therefore turned to incentives to produce higher response rates, and indeed, offering extra credit is one of the most successful ways of doing so (4). Of course, there is an ethical consideration in offering extra credit for evaluative feedback: students might adjust their responses knowing that they are receiving a reward for providing them.
Eric Sundstrom and colleagues recently published an article examining the utility of micro-incentives as a possible solution to this ethical dilemma (5). Their university converted to online evaluations in 2010. They could therefore examine response rates using paper-and-pencil forms, online evaluations, and incentive-based online evaluations. The incentive was considered a micro-incentive because of the small value of the extra credit. Students were offered exactly one point of extra credit for completion, which was the equivalent of 0.24% of the total course grade. If 70% of the class completed the evaluation, everyone would receive two points (0.5%) of extra credit.
What they found was that paper-and-pencil methods had response rates of about 60%. After introducing the online evaluation, response rates tanked to about 30%. Micro-incentives vastly improved response rates to about 80%.
They also examined the quality of the evaluations. Over time, the courses were taught by the same instructor with the same assignments and grading criteria, and the evaluations were identical on paper and online. When examining questions about overall instructor effectiveness, the means were almost identical for semesters with incentives and those without. (And Colleague 2 was so worried...)
There are some limitations to this study. Instructors’ evaluations might improve over time, and students may get used to a certain type of evaluation. Essentially, stuff happens over time, which makes time an important confound in this type of design. So, Sundstrom and colleagues ran another study, this time randomly assigning several sections of a single course in a single semester to receive incentives or not. Their results were very similar: incentive sections had response rates of about 85%, while no-incentive sections averaged around 25%. And again, there was no significant difference in overall evaluation of the instructors (even though this study had different instructors across sections)!
Once again, this post is not meant to argue for the value of course evaluations. Given that course evaluations are a part of life in higher education, and that presumably readers of this blog are interested in improving their teaching, we ought to attempt to get the best feedback possible from them. In order to understand all of our students’ needs and opinions, we need high response rates on our evaluations. In our previous evaluation post, we discussed how to improve your evaluations themselves. Here, my suggestion is that you consider offering a very tiny amount of extra credit to improve your response rates as well. But don’t worry, Colleague 2, your ratings shouldn’t go down.
(1) Berk, R. A. (2012). Top 20 strategies to increase the online response rates of student rating scales. International Journal of Technology in Teaching and Learning, 8, 98–107.
(2) Morrison, R. (2011). A comparison of online versus traditional student end-of-course critiques in resident courses. Assessment & Evaluation in Higher Education, 36, 627–641.
(3) Stowell, J. R., Addison, W. E., & Smith, J. L. (2012). Comparison of online and classroom-based student evaluations of instruction. Assessment & Evaluation in Higher Education, 37, 465–473.
(4) Dommeyer, C. J., Baum, P., Hanna, R. W., & Chapman, K. S. (2004). Gathering faculty teaching evaluations by in-class and online surveys: Their effects on response rates and evaluations. Assessment & Evaluation in Higher Education, 29, 611–623.
(5) Sundstrom, E. D., Hardin, E. E., & Shaffer, M. J. (2016). Extra credit micro-incentives and response rates for online course evaluations: Two quasi-experiments. Teaching of Psychology, 43, 276–284.