The Learning Scientists


GUEST POST: Leveraging AI in a Research-Driven Way: Augmenting feedback during spaced retrieval practice using ChatGPT

By: Joshua Ling

Josh is a former middle school math teacher and Teach for America alum who believes deeply in every student's ability to succeed. After earning a master's in Education from Stanford University, he made a career change into software engineering. He then leveraged his classroom experience and software development skill set to build a free tool for teachers and students. To support the tool's development, Josh incorporated Podsie as a nonprofit organization, and recently Podsie became part of a larger nonprofit, Teaching Lab. In his free time, he likes hanging out with his wife and their dogs Stewie & Duckie. Back in 2021, Josh and his co-founder had the privilege of joining Megan on The Learning Scientists podcast to chat about Podsie, and they published a blog post about connecting teachers and students to learning science.

If you're a teacher who believes in the power of evidence-based learning strategies, we invite you to try Podsie for free! You can also stay up-to-date with what Podsie's doing by following them on Instagram, Twitter, or LinkedIn.


If you use edtech in your classroom, you've probably seen at least one of your tools recently advertising a brand-new "AI feature."

At Podsie, the core of what we’re building has always been research-driven, so as the AI hype rages on, we’ve been asking ourselves, “How can we leverage the recent advances in AI and LLMs in a research-driven way?” Put more concretely, are there research-backed teaching practices that AI makes easier to facilitate? 

What’s Podsie?

Before diving into that question, let me first zoom out a bit to explain what Podsie is.

Podsie is a free web app that allows teachers to provide personalized, automated spiraled review for each student throughout the school year. It's grounded in the highly effective, evidence-based strategies of spacing (1), retrieval (2), interleaving (3), and personalized review (4).

At a high level, here’s how it works:

  1. On the Podsie web app, teachers assign questions to students on something they’d recently learned.

  2. Students practice these questions (retrieval).

  3. Each question is then inserted into a cumulative Personal Review Deck that tracks the student's mastery of each question over time. Podsie then determines the optimal next time for the student to review that question (spacing). 

  4. Over time, as students learn more in a class and get assigned more questions on Podsie, each question for that subject cumulatively gets mixed in with other questions they’ve already learned (interleaving).

  5. By the end of the school year, the Personal Review Deck may have accumulated hundreds of questions (depending on how much content your course covers). Still, at any given moment, each student focuses only on practicing the questions they need to practice (personalized review). 
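To make the mechanics above concrete, here is a minimal sketch of how a Personal Review Deck could schedule questions. It assumes a simplified Leitner-style ladder of fixed intervals; Podsie's actual scheduler is more sophisticated, but the core idea is the same: correct answers push a question further out, misses bring it back soon, and only due questions get practiced.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical review intervals (in days) for each mastery level.
INTERVALS = [1, 3, 7, 14, 30]

@dataclass
class DeckItem:
    question_id: str
    level: int = 0                                      # current mastery level
    next_review: date = field(default_factory=date.today)

    def record_attempt(self, correct: bool, today: date) -> None:
        """Update mastery and schedule the next review (spacing)."""
        if correct:
            self.level = min(self.level + 1, len(INTERVALS) - 1)
        else:
            self.level = 0                              # missed: restart the ladder
        self.next_review = today + timedelta(days=INTERVALS[self.level])

def due_today(deck: list[DeckItem], today: date) -> list[DeckItem]:
    """Personalized review: only due questions get practiced, which
    naturally interleaves items from different units."""
    return [item for item in deck if item.next_review <= today]
```

Because the deck is cumulative, questions from earlier units keep resurfacing among newer ones as their intervals come due, which is where the interleaving in step 4 comes from.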

Quick aside: If you want to learn more about Podsie, feel free to check out any of the following: our 1-minute overview video, our podcast episode on The Learning Scientists, or our former guest post on The Learning Scientists.

Podsie’s area of growth: providing better feedback for students

While we think we’re doing a good job of enabling teachers and students to use the strategies mentioned above easily, there’s one other evidence-based principle we’d like to facilitate better: providing feedback.

Research has shown that retrieval practice yields better learning outcomes when relevant feedback, such as the correct answer, is provided (5). That's why on Podsie, the answer is shown after each retrieval attempt. However, further studies indicate that just showing the correct answer isn't enough to help students tackle new inference questions that require a deeper understanding of the underlying concept (6). Instead, providing an explanation is needed to ensure that students can succeed on other related questions. Concretely, we've also noticed many instances on Podsie where, even after seeing the correct answer, a student will still miss that question on subsequent attempts, especially for questions at a higher level than simple recall on Bloom's Taxonomy (7).

To address this problem, Podsie allows teachers to add an "explanation" that is displayed after the student answers the question. Unfortunately, writing explanations is time-consuming: of the 72k questions teachers have created on Podsie so far, only 26% (19k) have explanations.

As a result, we often see cases like the one below where, despite a student's considerable effort, the results fall short. In this case, the student has now missed this question 4 times in a row since first seeing it 19 days ago. 

Image is a screenshot from Podsie, provided by author

As you can see here, for a question like this, just seeing the correct answer is often not enough to help fill in potential gaps in understanding. So…

Leveraging AI for Learning

One feature we recently implemented lets teachers click a button on Podsie and use AI to generate an explanation for each question.

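As a rough sketch of how such a feature might work under the hood, the function below assembles a prompt asking an LLM for a student-facing explanation. The prompt wording, function name, and parameters are illustrative assumptions, not Podsie's actual implementation, and the call to the model itself (e.g., via an LLM provider's API) is left out so the sketch stays self-contained.

```python
def build_explanation_prompt(question: str, correct_answer: str,
                             grade_level: str = "middle school") -> str:
    """Assemble an instruction asking the model for a step-by-step
    explanation a student can learn from, not just the final answer."""
    return (
        f"You are a helpful {grade_level} tutor.\n"
        f"Question: {question}\n"
        f"Correct answer: {correct_answer}\n"
        "Write a clear, step-by-step explanation of how to reach the "
        "correct answer, so a student who missed this question can "
        "understand the underlying concept."
    )

prompt = build_explanation_prompt("Solve for x: -2x + 3 = -5", "x = 4")
```

The resulting text would be sent to the model once, reviewed by the teacher, and saved as the question's explanation.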

We’re excited about leveraging LLMs to make it easier for teachers to add helpful explanations for each question, but we’re even more excited about the potential of using LLMs to provide dynamic and more targeted feedback for students.

Dynamic and targeted feedback

For example, let's consider a middle school math classroom where students use Podsie to practice solving two-step equations. A student misses the equation -2x + 3 = -5. Suppose the teacher used the AI Generate feature to create a step-by-step explanation of how to solve the problem.

However, while this feedback provides the student with a potentially helpful explanation, it still falls short in some regards.

Why? Because each student's misunderstanding of the question may differ. One student may not understand how to isolate the variable x, while another may struggle with negative integer operations. Providing the same explanation to all students may not target the root cause of their mistakes.

To tackle this issue, we've added a feature that uses LLMs to provide a more nuanced, dynamic form of feedback. The idea is to gauge the student's specific errors or misconceptions; Podsie can then offer targeted guidance that addresses the student's unique difficulties. So, for the student who struggles with isolating variables, the feedback might say, "Remember, the goal is to get x by itself on one side of the equation," followed by a walkthrough of how to do so in the context of this problem. For the student who struggles with negative integers, Podsie could instead offer an explanation targeting that issue.

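The idea of mapping a student's specific wrong answer to a misconception can be sketched for this exact equation. The wrong-answer patterns and feedback messages below are illustrative assumptions; in practice, an LLM would infer the misconception from the student's response rather than rely on hard-coded cases.

```python
def targeted_feedback(student_answer: int) -> str:
    """Return feedback aimed at the likely misconception behind a
    specific answer to -2x + 3 = -5 (correct answer: x = 4)."""
    if student_answer == 4:
        return "Correct! -2x = -8, so x = 4."
    if student_answer == -4:
        # Likely a negative-integer slip when dividing -8 by -2.
        return ("Careful with negatives: from -2x = -8, dividing both "
                "sides by -2 gives x = 4, not x = -4.")
    if student_answer == 1:
        # Likely added 3 to the right side instead of subtracting it,
        # i.e., an isolating-the-variable error.
        return ("Remember, the goal is to get x by itself. First subtract "
                "3 from both sides: -2x = -8. Then divide by -2: x = 4.")
    # Unrecognized error: fall back to a generic step-by-step explanation.
    return ("Step-by-step: subtract 3 from both sides (-2x = -8), then "
            "divide both sides by -2 (x = 4).")
```

Notice how the x = -4 and x = 1 cases get different guidance: the same wrong question, two different root causes, two different messages.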

In Summary

Incorporating AI into educational tools often promises a revolution in teaching and learning. However, at Podsie, we are careful to ensure that what we’re building is grounded in empirical research. By augmenting Podsie with AI-generated dynamic and targeted feedback, we aim to further facilitate research-backed teaching practices. Our ultimate goal is not just to be an edtech company that uses AI, but one that leverages it in a meaningful, student-centric way to enrich the learning experience for all.

If you're a teacher who believes in the power of evidence-based learning strategies, we invite you to try Podsie for free! Help us take the next step in shaping a more personalized and effective educational experience for students everywhere. Simply sign up on our website, and you can start leveraging the benefits of research-driven automated review in your classroom today.


References:

(1) Kang, S. H. (2016). Spaced repetition promotes efficient and effective learning: Policy implications for instruction. Policy Insights from the Behavioral and Brain Sciences, 3(1), 12-19. https://doi.org/10.1177/2372732215624708

(2) Roediger, H. L., Putnam, A. L., & Smith, M. A. (2011). Ten benefits of testing and their applications to educational practice. In J. Mestre & B. Ross (Eds.), Psychology of learning and motivation: Cognition in education (pp. 1-36). Elsevier.

(3) Samani, J., & Pan, S. C. (2021). Interleaved practice enhances memory and problem-solving ability in undergraduate physics. npj Science of Learning, 6(1), 32. https://doi.org/10.1038/s41539-021-00110-x

(4) Lindsey, R. V., Shroyer, J. D., Pashler, H., & Mozer, M. C. (2014). Improving students’ long-term knowledge retention through personalized review. Psychological Science, 25(3), 639-647. https://doi.org/10.1177/0956797613504302

(5) Shute, V. (2008). Focus on formative feedback. Review of Educational Research, 78, 153–189. https://doi.org/10.3102/0034654307313795

(6) Butler, A. C., Godbole, N., & Marsh, E. J. (2013). Explanation feedback is better than correct answer feedback for promoting transfer of learning. Journal of Educational Psychology, 105(2), 290–298. https://doi.org/10.1037/a0031026

(7) Agarwal, P. K. (2019). Retrieval practice & Bloom’s taxonomy: Do students need fact knowledge before higher order learning? Journal of Educational Psychology, 111(2), 189–209. https://doi.org/10.1037/edu0000282