When is Retrieval Practice Most Efficient?

by Cindy Nebel

In my position at a med school, my students use a lot of practice tests. They use them to prepare for exams in class, for Shelf exams during clinical rotations, and sometimes exclusively to prepare for board exams. One thing I’ve realized in giving advice to these students is that time matters. As someone who knows a lot about spaced practice and retrieval practice, I want to tell them how to review material in the most effective way, but frankly, they just don’t have time for it. Often when they use question banks for self-testing, they skip over feedback for correct answers and keep only the questions they got incorrect in rotation. This sure feels efficient, but I know from retrieval practice research that we often need to answer a question multiple times for it to stick (1).

The research study I’m reviewing today (2) tackles this issue of effectiveness vs. efficiency. In this series of studies, participants either received retrieval practice that was experimenter-controlled and structured in the most effective way, or they were given the option to do retrieval practice however they wanted, which typically meant dropping questions in a more efficient way. Which one is better?

Methods

In these studies, participants studied English-German word pairs in a couple of different ways:

  • Participant-controlled: In these groups, the participants got to decide after seeing each word pair whether they wanted to study it again, take a practice test, or drop the word pair from the “deck”. If they chose to take a practice test, they either received a multiple choice question or a cued recall (short answer) question, depending on which condition they were in. After they gave their answer, they were told the correct answer and again asked if they wanted to study, test, or drop the item.

  • Experimenter-controlled: In these groups, participants first studied the words and then kept testing on each item until they had answered it correctly 1, 3, or 5 times (depending on the item), at which point it was dropped from the deck. Some of the participants got multiple choice tests and others got short answer tests, depending on which condition they were in.
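
To make the two schedules concrete, here is a minimal sketch in Python (not from the paper - the deck size and the fixed 40% chance of recall are made-up assumptions, and real recall improves with practice) of how items leave the deck under each rule:

```python
import random

def run_deck(deck, criterion, p_recall=0.4):
    """Cycle through a deck of word pairs, testing each item until it
    has been answered correctly `criterion` times, then drop it.
    Returns the total number of practice trials used."""
    correct = {item: 0 for item in deck}
    queue = list(deck)
    trials = 0
    while queue:
        item = queue.pop(0)
        trials += 1
        if random.random() < p_recall:  # stand-in for a retrieval attempt
            correct[item] += 1
        if correct[item] < criterion:
            queue.append(item)  # item stays in rotation
    return trials

random.seed(1)
deck = [f"pair_{i}" for i in range(40)]
# Typical participant-controlled behavior: drop after one correct answer.
print("drop after 1 correct:", run_deck(deck, criterion=1), "trials")
# Experimenter-controlled criterion: stay in until 5 correct answers.
print("5-correct criterion: ", run_deck(deck, criterion=5), "trials")
```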

Results - How do students self-test?

The first question the researchers asked was what students would do when given the choice of how to self-test. Perhaps unsurprisingly, once students got a question right, they dropped it from the deck. This was true across both multiple choice and short answer questions.

There were some small differences though. For example, participants restudied items more often when they knew they were going to get a short answer question vs. a multiple choice question, which again makes sense.

Results - What leads to the best retention?

For the sake of simplicity, I’ll just share the main finding here, though it could certainly be broken down in a lot of different ways. In short, when looking at the participant-controlled conditions, there wasn’t much difference between multiple choice and short answer practice tests on the final tests. They were pretty much the same. But in the conditions where the experimenters had participants keep testing until they answered each question correctly 5 times, participants scored higher than in the comparable participant-controlled conditions, where questions were dropped after getting them right once.

This makes sense and follows previous research showing that we need more than one correct attempt at retrieval practice for long-term retention. But wait! There’s more…

Results - Which type of practice test is more efficient?

Participants spent less time on their practice tests when they were multiple choice than when they were short answer, and because final performance was about the same, multiple choice self-testing turned out to be the better choice when practice was participant-controlled. But again, participants didn’t score as high when they dropped items after getting them right once. On average, participants needed about 3 practice trials per item to get it right once when they were in control, but it took about 7.4 trials per item to get a question right 5 times. And in the participant-controlled condition, they spent only about 10s per item, vs. 29s in the experimenter-controlled condition. That’s a lot more time spent self-testing when trying to answer every question right 5 times!

The researchers captured this trade-off by calculating “gains per minute” - that is, how much higher performance on the final test was for every minute spent working. On the final multiple choice test, the folks who controlled their own study gained 17% per minute of studying, while the folks who had to get questions right 5 times gained only 7% per minute. Wow.
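
To see how that metric works, here is a tiny worked example (the gains and minutes below are hypothetical numbers chosen only to reproduce the reported 17% and 7% rates; they are not figures from the paper):

```python
def gains_per_minute(final_test_gain_pct, minutes_practicing):
    """Efficiency metric from the study: percentage-point gain on the
    final test divided by minutes spent on practice testing."""
    return final_test_gain_pct / minutes_practicing

# Hypothetical inputs chosen only to illustrate the reported rates:
print(gains_per_minute(34, 2.0))  # self-paced practice -> 17.0 %/min
print(gains_per_minute(49, 7.0))  # 5-correct criterion -> 7.0 %/min
```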

Takeaway

What does this all mean? It’s not as simple as saying that one way of self-testing is better than another. What’s great about this study is that the researchers didn’t simply look at final performance and conclude that the experimenter-controlled condition was better. Instead, they considered competing demands. Yes, long-term retention matters. It’s our goal as educators. But time matters too. We have a lot on our plates, a lot of material to get through, a lot of different activities we’d like to do to improve learning. Students have a lot on their plates too - multiple classes to study for, extracurriculars, relationships, life.

My takeaways from this study are:

  1. Retrieval practice is effective!

  2. If you’re strapped for time, don’t sacrifice the retrieval practice, but maybe just keep working until you can recall everything once.

  3. If you have extra time, it’s not a bad idea to redo the questions you got right a few more times so you can make some extra gains.


References:

(1) Vaughn, K. E., Rawson, K. A., & Pyc, M. A. (2013). Repeated retrieval practice and item difficulty: Does criterion learning eliminate item difficulty effects? Psychonomic Bulletin & Review, 20, 1239-1245.

(2) Badali, S., Rawson, K. A., & Dunlosky, J. (2023). How do students regulate their use of multiple choice practice tests? Educational Psychology Review, 35(2), 43.