Classroom Blunders: When Learning Science Goes Awry

Image from Pixabay

By: Cindy Wooldridge

Sometimes instructors or students contact us and ask for a prescription for how to study or how to implement the strategies successfully. There are many reasons why we can't offer a simple answer to these questions, but today I want to offer a personal anecdote about how applying the science of learning is not always as simple as it seems.

During the fall of 2015, I was starting a new position at Washburn University. I needed to get some research up and running, so I quickly threw together an idea that had been brewing for some time. We know that retrieval practice is an effective way of promoting retention, but it’s such a pain to write high quality daily quizzes and grade them. What if, instead, the students wrote and answered their own quizzes? This method could possibly create an efficient and effective means of quizzing in the classroom.

Here’s what we did… With the help of some amazing graduate students (I’m looking at you, Eileen!), students across four of my courses received four different types of quizzing, one during each quarter of the semester.

The conditions included:

1) No quizzes (as a baseline)

2) Standard quizzes – At the end of each class, I asked a single short-answer question that required approximately 1-2 sentences to answer and covered a major concept from the day’s lesson.

3) Student-written quizzes – The students were asked to generate a single short-answer question from the day’s lesson and to provide the correct answer.

4) Traded quizzes – The students were asked to generate a single short-answer question, which was collected and given to another student to answer.

After all conditions, students were encouraged to ask questions in order to give immediate feedback.

We hypothesized that students writing their own questions would benefit from the generation of the question and that the traded conditions would have the added benefit of additional retrieval.

The result? Epic failure. Not only did the quizzing conditions do no better than the baseline, but the student-written questions actually did significantly worse. That is, by trying to implement retrieval practice in my classroom, I had actually hurt my students!

So why did it happen? We have a few theories (and a lot of hindsight bias), but the one with the most support from our additional analyses is item-specificity. The quizzes I wrote were not significantly lower than baseline, whereas the traded and student-written quizzes were. When we looked at the questions the students wrote, they were very factual, often covered a single item from either the very beginning or the end of class (this should be no surprise to memory students), and could be answered with a single word or phrase. The questions I came up with in the moment were less specific and usually involved multiple concepts from the lesson that had to be considered together. The fact that students homed in on a single concept when writing and answering their own questions may have inhibited memory for the remaining concepts in that lesson, whereas my questions required students to consider the lesson as a whole, boosting retention for much of the information. However, it should be noted that this wasn't true of all of my quizzes, and I may have caused some inhibition as well.

There were several valuable lessons to be learned from this blunder.

Perhaps the most important is that classroom research is not exactly straightforward. When doing research in the classroom, we are counting on students to do what we want them to do, but there are so many variables we cannot control. My students undoubtedly studied this material outside of class, making my quizzing conditions just one small intervention within a huge number of stimuli. A lot more careful consideration of how this intervention would work within the classroom was needed.

Another very important take-away is that learning science is not one-size-fits-all. Just because we say retrieval practice works doesn't mean it works in all scenarios and under all circumstances.

This is why it's so important to be skeptical. Use objective measures to assess whether and how a teaching strategy is working for your students, and take time to reflect on how and why it worked (or didn't). This is a great example of a time when my intuition said this absolutely should work, but we should follow the evidence, not just intuition.

Finally, there are lessons to be learned from making mistakes. Ok, so this didn't work out. But I tried something, I learned something, and my future students will benefit from the lessons I learned. Had I just implemented quizzing without assessing it, I could have been inadvertently hurting my students. Had I not implemented quizzing at all, I wouldn't have learned how relational processing can help them. One of the hurdles for the new crop of students coming into higher ed is understanding that it's ok to stumble, because sometimes we learn more from blunders than from successes.
