I'm a Teacher Who Loves Quizzing; But Does Quiz Format Matter?
By: Megan Smith & Yana Weinstein
In a previous 2-part blog post (part 1, part 2), we discussed the benefits of frequently quizzing our students. Of course, providing quizzes in class can mean a lot of different things. I (Megan) mentioned that I give my students extra credit quizzes. Here’s what this process looks like in my classes: At the beginning of some lectures (students don’t know which lectures until it happens!), I put a few questions up on the board and ask my students to write out their answers. The questions I ask are usually broad and guide the students to describe and explain concepts from a previous lecture. I may ask students to describe two opposing theories, explaining how they are similar and how they are different. Next, I may ask the students to provide an example of an experiment that provided evidence for one theory or the other. Finally, I may ask them to explain how the two opposing theories are reconciled within the literature. I usually give the students around 10 minutes to answer the questions, and then collect them. Once they are all collected, I call on students to answer the questions out loud. We talk about the answers, and I explain anything that the students are confused about.
I (Yana) tend to incorporate quizzes into most of my lecture classes. I do this by picking 5 key concepts that I want the students to come away with that day, and writing short-answer questions related to these concepts. I then put these questions on slides, one per slide, and stick each of the 5 slides into the lecture shortly after I teach each concept (I have also experimented with putting all the questions at the end of each lecture – in fact, my paper on this just came out (1) and it will be the topic of my next post!). The quizzes are fairly casual in that students are not forbidden from looking back at their notes to answer the questions; however, from my observations, most tend to attempt the questions from memory. Also, although the quizzes form part of the students’ grades, it is not a large enough part to stress them out, and I make sure the grading scheme is generous enough that most get in the A-B range as long as they are paying attention; a low grade on my in-class quizzes is serious cause for concern.
This is all well and good, but we realize that there are some practical barriers to this type of activity in some classrooms. Classes at Rhode Island College, where Megan teaches, are relatively small and capped at 30 students. At UMass Lowell, classes are also quite small, and Yana has a TA who can help with grading. However, we are no strangers to the large lecture. When Megan taught at Utah State University Eastern, she had closer to 80 students. As undergraduates at Purdue University and Warwick University in the UK, we sometimes attended larger lectures of hundreds of students, even up to 450! And even in our 30-student classes, grading the few paragraphs written by each student and providing written feedback for each student can be time-consuming.
Here is the good news: a fair amount of research has looked at different practice quiz formats (e.g., short answer vs. multiple choice), and the data suggest there are relatively small differences in the amount of learning produced by these different formats. For example, in one study, I (Megan) had students read various passages about history topics (e.g., the KGB) and then take a quiz with either short-answer, multiple-choice, or hybrid questions – or simply read the passage (2).
On a short-answer question, the students were given a prompt and had to produce the answer on their own. On a multiple-choice question, the students were given a prompt and five different choices, and they had to choose the correct answer among the choices. On a hybrid question, the students were first given a prompt and asked to produce the answer on their own. After producing the answer, they clicked a button on the computer, and five different choices appeared; as in the multiple-choice condition, students were at this point asked to select the correct answer among the choices. Finally, the control group just read the passage and then saw key statements from the passage (these corresponded to the questions the quizzing groups saw). Students came back 1 week later, and I tested them to see how well they had learned the material from the previous week. All of the groups that took any type of quiz performed better than the no-quiz control. However, there were only extremely small differences between the different quiz formats.
We ran four experiments like this, and then calculated the overall effect size (Cohen’s d) comparing the quiz groups to the control, and the various quiz format groups to one another. Doing this allowed us to compare the effects of retrieval practice against norms of what is a “big” effect and what is a “small” effect (for example, check out Hattie (3)). In general, an effect of d = 0.2 is considered to be small, and an effect of d = 0.8 is considered to be large. The overall effect of practicing retrieval, compared to the control of reading, across all of our experiments was always at least d = 1.22. That’s really big! Yet the overall effect sizes between quiz formats were around d = 0.05, or very, very small.
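For readers who want to try this with their own class data, Cohen’s d for two independent groups is simply the difference between the group means divided by a pooled standard deviation. A minimal sketch in Python (the numbers plugged in below are made up for illustration, not taken from the study):

```python
from math import sqrt

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d for two independent groups, using a pooled standard deviation."""
    pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical example: a quiz group vs. a read-only control group
d = cohens_d(mean1=0.75, sd1=0.15, n1=30, mean2=0.55, sd2=0.18, n2=30)
print(round(d, 2))
```

With equal group sizes and similar spreads, a difference of about one pooled standard deviation gives d near 1 – the ballpark where our retrieval-practice effects landed.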
So what’s the take-away message? Any type of quizzing that encourages retrieval should help students learn. You can ask your students to write out what they know, you can give them multiple-choice questions, you can give them some combination – at least for increasing learning it doesn’t seem to matter!
But of course, this is science, and science is messy. In a follow-up post, we’ll discuss some potential pitfalls of multiple-choice questions, and how to avoid them.
(1) Weinstein, Y., Nunes, L. D., & Karpicke, J. D. (2016). On the placement of practice questions during study. Journal of Experimental Psychology: Applied, 22, 72-84.
(2) Smith, M. A., & Karpicke, J. D. (2014). Retrieval practice with short-answer, multiple-choice, and hybrid tests. Memory, 22, 784-802.
(3) Hattie, J. A. (2009). Visible learning: A synthesis of over 800 meta-analyses related to achievement. Oxon, UK: Routledge.