AI and Adolescent Well-Being: New APA Health Advisory
By Althea Need Kaminske
With Artificial Intelligence (AI) applications expanding rapidly into daily life, the American Psychological Association (APA) has issued a health advisory on the impacts of AI on adolescent well-being (1). The health advisory synthesizes research on different aspects of AI use and provides recommendations to maximize the benefits of AI while minimizing harm. The APA has previously released a similar health advisory on social media use in adolescence that is worth checking out as well (2).
Big Picture
The health advisory highlights the nuances and complexities of AI use and, as with social media, advises that it is not all “good” or all “bad”. AI and its uses are varied and nuanced, and so too are adolescents. Adolescence is a long developmental period, and the recommendations for how a 12-year-old should use AI may not be the same as those for a 16-year-old. Similarly, adolescents vary greatly in maturity, temperament, neurodiversity, etc., and will have an array of different responses to content or online experiences. Put simply: it is difficult to make hard-line, one-size-fits-all recommendations.
Recommendations
Given the nuances noted above, the advisory has several different categories of recommendations. Below is a summary of each category, with calls to action for parents, caregivers, and educators in bold, and my comments on the recommendations in italics.
Ensure healthy boundaries with simulated human relationships
Many AI chatbot platforms are designed to simulate human relationships and are marketed as companions or experts. The APA urges safeguards to mitigate harm because 1) adolescents are less likely to question the accuracy and intent of a chatbot and 2) adolescents’ relationships with AI may displace or interfere with the development of healthy, real-world relationships. The APA recommends:
Prioritizing the development of features that prevent exploitation, manipulation, and erosion of real-world relationships. For example, providing regular reminders that users are interacting with a bot, or offering resources and suggestions that encourage human interactions.
Developing regulations to ensure that AI systems designed for adolescents protect mental and emotional health.
Parents, caregivers, and educators should discuss AI literacy with adolescents through programs that a) explain that not all AI-generated content is accurate, b) discuss the intent of some AI bots, and c) educate about indicators of misinformation.
AI for adults should differ from AI for adolescents
Adolescents are a particularly vulnerable group, and as such, AI programs designed for adolescents should be held to more stringent standards. The APA recommends:
Age-appropriate defaults
Transparency and explainability
Reduced persuasive design
Human oversight and support
Rigorous testing
Encourage uses of AI that can promote healthy development
AI can assist in brainstorming, creating, organizing, summarizing, and synthesizing information (3). Additionally, AI can provide scaffolding and personalized feedback (4). All of these features can enhance learning and development when used appropriately, that is, when AI encourages further elaboration and exploration of a topic rather than shortcutting it.
“To maximize AI’s benefits, students should actively question and challenge AI-generated content and use AI tools to supplement rather than replace existing strategies and pedagogical approaches.” (1)
As I’ve written about before, I have many doubts and criticisms of the wholesale adoption of AI. One of the aspects I am most concerned about is the potential to bypass meaningful and beneficial challenge. For example, when I discuss note-taking strategies with students, I highlight that even deciding what to take notes on is one of the first steps in an active learning process. Having AI generate a summary of notes deprives you of that initial learning opportunity. However, real-world time constraints and use cases may make that initial learning opportunity less important in some situations. I’m in favor of the APA’s guidelines here because they call for having a conversation about the pros and cons so that educators and learners can make that choice for themselves, rather than assuming that it’s either all “good” or all “bad”.
Limit access to and engagement with harmful and inaccurate content
Exposure to harmful content is associated with a number of poor mental health outcomes, like anxiety and depression. The APA recommends:
Developing robust protections for AI systems used by adolescents. This includes protections against content that is inappropriate, dangerous, illegal, biased and/or discriminatory, or that may trigger similar behavior among vulnerable youth.
User reporting and feedback systems to customize content restrictions
Educational resources to help adolescents and caregivers recognize and avoid harmful content
Collaboration with mental health professionals, educators, and psychologists
Accuracy of health information is especially important
Adolescents often seek out health information online (5), and misinformation, or incomplete information, can lead to harmful behaviors and misdiagnosis, among other negative outcomes. The APA recommends:
AI systems that provide health information should ensure the accuracy of the information and/or provide explicit and repeated warnings that there may be inaccuracies.
AI systems should provide clear and prominent disclaimers that AI-generated information is not a replacement or substitute for professional health advice.
AI systems should provide resources and reminders to contact an educator, school counselor, pediatrician, or other appropriate expert or authority to seek real-world help.
Parents, caregivers, and educators should remind adolescents that health information provided by AI may not be accurate and may potentially be harmful.
I want to note that these recommendations come after the APA met with the Federal Trade Commission in February 2025 to discuss the impersonation of mental health professionals by chatbots. There are at least two lawsuits against an AI company after teenagers interacted with AI chatbots claiming to be licensed therapists. One of the cases tragically ended in suicide after prolonged interaction with the chatbot.
Protect adolescents’ data privacy
The APA recommends:
Maximizing transparency and user control and minimizing potential harm associated with data collection
Limiting the use of adolescents’ data for advertising, personalized marketing, the sale of user data to third parties, or any use other than what it was explicitly collected for
Transparency in data collection
Recognizing that data collected by AI, including biometric and neural information from emerging technologies, can provide insight into mental states and cognitive processes
Protect likeness of youth
Adolescents’ likenesses, including their voices and images, can be used to create harmful content for cyberbullying and harassment. The APA recommends:
AI platforms should implement restrictions on the use of youths’ likeness to prevent the creation and dissemination of this content.
Parents, caregivers, and educators should discuss the dangers of posting images online and strategies for responding when adolescents encounter images of peers or themselves that may be inappropriate or illegal.
Educators should consider policies to manage the creation and proliferation of hateful AI-generated content in schools.
Empower parents and caregivers
Parents and caregivers often have limited time or capacity to learn about emerging technologies, despite the vital role that they play in guiding and protecting adolescents’ interactions with these technologies. The APA recommends:
Industry stakeholders, policymakers, educators, psychologists, and other health professionals should collaborate to develop accessible and user-friendly resources to provide clear guidance on AI technologies. (I hope this post helps!!)
Customizable and accessible parental control settings and interactive tutorials should be included in AI platforms.
Implement comprehensive AI literacy education
AI literacy is crucial for adolescents who are navigating an increasingly AI-driven world. The APA recommends:
“Educators should integrate AI literacy into core curricula, spanning computer science, social studies, and ethics courses; provide teacher training on AI concepts, algorithmic bias, and responsible AI use; offer hands-on learning experiences with AI tools and platforms, emphasizing critical evaluation of AI-generated content; and facilitate discussions on the ethical implications of AI, including privacy, data security, transparency, possible bias, and potential societal impacts.” (1)
Policymakers should develop national- and state-level guidelines for AI literacy education.
Technology developers should create transparent and accessible explanations of AI algorithms and data collection practices.
Prioritize and fund rigorous scientific investigations of AI’s impact on adolescent development
More work is needed to better understand the nuances of various AI applications on adolescent development and well-being. The APA recommends:
Longitudinal studies
Research designs that help to identify causal relationships and long-term effects
Diverse population studies
Data accessibility and transparency
Interdisciplinary collaboration
Final Thoughts
I encourage parents, caregivers, and educators to read the full health advisory, which goes into more detail. The advisory notes several times that we, by and large, fell short when it came to helping youth navigate social media. Many of the recommendations made in this health advisory are similar to those made in the advisory about social media, with some of the more pressing concerns involving privacy and the tools needed to protect against and navigate online harassment and harmful content. My hope is that you feel empowered to ask the right questions and help the young people in your life navigate the challenges and potential that AI brings.
References:
American Psychological Association. (2025). Health advisory: Artificial intelligence and adolescent well-being. https://www.apa.org/topics/artificial-intelligence-machine-learning/health-advisory-ai-adolescent-well-being
American Psychological Association. (2023). Health advisory on social media use in adolescence. https://www.apa.org/topics/social-media-internet/health-advisory-adolescent-social-media-use
D’Mello, S. K., Biddy, Q., Breideband, T., Bush, J., Chang, M., Cortez, A., Flanigan, J., Foltz, P. W., Gorman, J. C., Hirshfield, L., Ko, M.-L. M., Krishnaswamy, N., Lieber, R., Martin, J., Palmer, M., Penuel, W. R., Philip, T., Puntambekar, S., Pustejovsky, J., Reitman, J. G., Sumner, T., Tissenbaum, M., Walker, L., & Whitehill, J. (2024). From learning optimization to learner flourishing: Reimagining AI in education at the Institute for Student-AI Teaming (iSAT). AI Magazine, 45(1), 61–68. https://doi.org/10.1002/aaai.12158
D’Mello, S. K., Duran, N., Michaels, A., & Stewart, A. E. B. (2024). Improving collaborative problem-solving skills via automated feedback and scaffolding: A quasi-experimental study with CPSCoach 2.0. User Modeling and User-Adapted Interaction, 34, 1087–1125. https://doi.org/10.1007/s11257-023-09387-6
Herriman, Z., Tchen, H., & Cafferty, P. W. (2025). Could be better: Adolescent access to health information and care. European Journal of Pediatrics, 184, Article 7. https://doi.org/10.1007/s00431-024-05868-x