GUEST POST: How Test Expectancy Promotes Learning



By Michelle Rivers

Michelle Rivers is a PhD student studying Cognitive Psychology at Kent State University in Ohio.  Her research applies theories of learning and memory to enhance educational practice.  She is particularly interested in investigating factors that contribute to students’ metacognitive judgments and how learners understand and manage their own learning.  She can be found on Twitter at @meta_michelle and her website is https://sites.google.com/view/michellerivers/.

If you’re an educator, there’s no doubt your students have asked questions like, “Are we going to be tested on this?” and “Is the exam going to be essay or multiple-choice?” Why do students want these answers? One possibility is that students will use them to decide how to prepare for an upcoming exam, either by studying more in anticipation of an exam, or by studying differently depending on what they know about the exam. For example, students might pay more attention to the material they are learning when they know they will eventually be tested on it, or they might adjust their study strategies in anticipation of different types of exams. Evidence illustrating both possibilities shows why answering these questions may help students prepare for exams.

 Image from Pexels


Do students learn more when expecting an exam?

Simply expecting a test of any kind can lead to enhanced processing of studied material, such as by reducing learners’ mind-wandering during studying (1) or by reducing interference from previously studied information (2). Researchers at Washington University found that test performance is higher when students expect a final cumulative test compared to when they do not expect a final test (3). Szpunar, McDermott, and Roediger (2007) had college students study sets of five word lists and recall the words after each list was presented. At the end of the experiment, the students were asked to recall all of the words across the five lists (similar to a cumulative exam at the end of many courses). Whereas one group of students was warned about this upcoming cumulative test, another group was not. The authors found that students who were expecting a final test performed better than those who were not, even though the students were not provided with an opportunity to restudy the material. Other researchers have similarly found increased performance for learners expecting either a multiple-choice or essay test (compared to students with no test expectation) after reading a fictional passage (4). Although students tend to despise cumulative exams, instilling this expectation can lead to learning gains.

 Image from iStock


Do students study differently when expecting different types of exams?

Finley and Benjamin (2012) answered this question by presenting college students with lists of word pairs (e.g., trumpet – planet) to learn and varying the type of test students expected to receive and the type of test they actually received. The format of the tests students received after each list varied across students: Whereas some students received a cued-recall test (i.e., recall the target word, planet, when presented with the cue word, trumpet), other students received a free-recall test (i.e., recall as many target words as possible, in any order). Critically, on the test of the final list, students either received a test they had expected (i.e., cued recall if they had been receiving cued-recall tests for the prior lists, or free recall if they had been receiving free-recall tests for the prior lists), or a test they did not expect (i.e., free recall if they had been receiving cued-recall tests for the prior lists, or cued-recall if they had been receiving free-recall tests for the prior lists). If students are able to adjust their studying in anticipation of a particular test format, students who receive a test that aligns with their expectations should outperform those who receive a test in a format that violates their expectations.

This pattern of results is exactly what the authors found (5). Of the students who received a final cued-recall test, performance was better for those who expected cued recall than free recall. Similarly, on the final free-recall test, performance was better for those who expected free recall than those who expected cued recall. This performance difference was driven by the types of study strategies students used in anticipation of particular test formats. Students who expected a cued-recall test tended to use cue-target association strategies, such as imagining the two words interacting (e.g., picturing a trumpet blowing out mini planets). In contrast, students who expected a free-recall test tended to focus on memorizing only the target words. These students were able to “learn how to learn” for the anticipated tests, but the strategies they adopted were specific to the type of test they expected.

 Results from Finley and Benjamin (2012), Experiment 1.


The idea that learners perform best when they receive a test in line with their expectations has also been supported by research using more educationally relevant materials in both the lab and classroom (6,7). For instance, Colak (2015) had college students study chapters from a Psychology textbook and told them to expect either a multiple-choice or an essay exam on the material. Test performance was better for students who received a test they expected versus a test they did not expect, both when the test was given immediately after studying and at a 2-week delay. Additionally, students believe that test format matters – on a survey assessing the most important factors for grades, test format ranked third, behind the amount of time spent studying and the subject being tested (6).

Format is not the only information about exams that matters for students. Exams can also vary in the level of questions asked, which can influence how students prepare. Jensen et al. (2014) had two sections of students in a Biology course complete weekly quizzes. The two sections differed in the level of questions asked on the quizzes: Students in one section were asked questions that required them to recall basic information (i.e., low-level questions), whereas students in the other section answered questions that required them to deeply understand the concepts they were learning or apply them in new ways (i.e., high-level questions). At the end of the course, students in both sections received a final exam consisting of both high- and low-level questions. Performance on the high-level exam questions was better for students who had received the high-level quizzes than for students who had received low-level quizzes throughout the course (8). These results suggest that students adjusted their studying based on the level of questions they expected. Students expecting low-level questions probably focused on memorizing terms and definitions, whereas students expecting higher-level questions likely focused their studying on integrating and applying the material they were learning. Thus, if our goal as educators is for students to develop deep conceptual understanding of the material they are learning, we should make sure our assessments match this goal.

Conclusions and Recommendations

Answering questions about an upcoming exam can be frustrating for instructors, as we’d much rather have our students ask about the course content. However, the research reviewed above suggests that telling students the type of test they should expect (e.g., multiple-choice or essay) and the target learning goal (e.g., memory, comprehension, or application) can benefit their learning. One way to instill test expectancy is to administer low-stakes practice tests that mimic high-stakes exams. Providing quizzes that are similar to upcoming exams can lead learners to adopt strategies that are tailored to that particular exam. And though being told to expect an upcoming test can be helpful to learners, actually gaining experience with the test leads to the greatest adoption of optimal learning strategies (9). Plus, as regular readers of this blog know, low-stakes quizzes will further enhance students’ learning through retrieval practice!


References

(1) Szpunar, K. K., Khan, N. Y., & Schacter, D. L. (2013). Interpolated memory tests reduce mind wandering and improve learning of online lectures. Proceedings of the National Academy of Sciences, USA, 110, 6313–6317.

(2) Weinstein, Y., Gilmore, A. W., Szpunar, K. K., & McDermott, K. B. (2014). The role of test expectancy in the build-up of proactive interference in long-term memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(4), 1039.

(3) Szpunar, K. K., McDermott, K. B., & Roediger, H. L., III. (2007). Expectation of a final cumulative test enhances long-term retention. Memory & Cognition, 35, 1007–1013.

(4) McDaniel, M. A., Blischak, D. M., & Challis, B. (1994). The effects of test expectancy on processing and memory of prose. Contemporary Educational Psychology, 19(2), 230–248.

(5) Finley, J. R., & Benjamin, A. S. (2012). Adaptive and qualitative changes in encoding strategy with experience: Evidence from the test-expectancy paradigm. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38(3), 632.

(6) Colak, B. (2015). Learning on anticipated multiple choice or essay tests: An experimental design. (Master’s thesis). Retrieved from ProQuest. (UMI # 1526429)

(7) Lundeberg, M. A., & Fox, P. W. (1991). Do laboratory findings on test expectancy generalize to classroom outcomes? Review of Educational Research, 61(1), 94–106.

(8) Jensen, J. L., McDaniel, M. A., Woodard, S. M., & Kummer, T. A. (2014). Teaching to the test… or testing to teach: Exams requiring higher order thinking skills encourage greater conceptual understanding. Educational Psychology Review, 26(2), 307–329.

(9) Storm, B. C., Hickman, M. L., & Bjork, E. L. (2016). Improving encoding strategies as a function of test knowledge and experience. Memory & Cognition, 44(4), 660–670.
