This article is part of a Special Section of On Board with Professional Psychology that focuses on the intersection of professional psychology and Artificial Intelligence (AI). Learn more about ABPP’s Artificial Intelligence Task Force.
As artificial intelligence (AI) continues to permeate various sectors, its integration into education has become a focal point for faculty aiming to enhance teaching and research while guiding students in the ethical use of these technologies. Psychology is beginning to embrace AI and its benefits for clinical practice (APA, 2023; 2024) and training (APA, 2025). Faculty are simultaneously intrigued by the potential of AI tools and insistent on an accountability system that deters their unauthorized use. All the while, many professors are just as confused as students about what is permissible when it comes to AI, and many are wrestling with how to adjust competency assessments now that AI use is ubiquitous and difficult to detect unless directly observed (Chaudhuri & Trainor, 2025; Rowsell, 2025). According to chief technology officers (CTOs), about half of institutions of higher learning do not give students institutional access to generative AI, but growing student and employer expectations about AI use raise questions about equity and workforce preparedness (Flaherty, 2025). Alongside cost, CTOs cited ethical concerns such as academic integrity and data security as reasons why they do not provide AI tools to students. Given that many institutions have not yet adopted formal policies to guide faculty and students in responsible AI use, this article explores how educators can leverage AI tools to improve learning outcomes and research capabilities, while also addressing the importance of teaching students about the responsible use of AI.
The Role of AI in Education
AI technologies, particularly generative AI tools like ChatGPT, have transformed the educational landscape since their introduction. These tools can assist in various aspects of teaching, from enhancing student engagement to streamlining administrative tasks. Faculty members are increasingly recognizing the potential of AI to support their teaching methodologies and research initiatives (Lin & Chen, 2024). Simultaneously, we must re-examine how we conceptualize and assess the competencies students should demonstrate with and without the help of AI technologies. Questions arise as to what basic competencies students need to demonstrate without AI first, even if they will take shortcuts later. We are challenged to prepare students for a professional identity in a rapidly evolving marketplace, as some of our core activities (e.g., CBT-based support for depression, anxiety, and substance misuse) may now be performed by AI-driven mental health apps like Cogniant and Woebot (Prochaska et al., 2021) and therapeutic chatbots that make mental health supports more accessible and less costly (Torous et al., 2021). Artificial intelligence technologies have the potential to augment the work of academic psychologists by developing student writing, reducing administrative burdens, creating educational materials, providing individualized feedback, and facilitating research.
Enhancing Teaching Practices
Improving Student Writing
Faculty can design assignments that require students to interact with AI tools to refine their writing skills. By creating effective prompts for AI, students learn to communicate their ideas clearly and effectively. Students are now using AI to develop outlines for research papers, find and summarize literature, and hone their professional vocabulary and tone. Students report mixed impacts of AI use on their creativity and emotions: while AI can enhance engagement, for example by gamifying studying, using generative AI to improve writing can also lead to emotional disengagement and anxiety (Lin & Chen, 2024). In clinical settings, automated note generation tools can enhance clinical documentation quality, reduce errors, and support better clinical decision making (Hutnyan et al., 2024; July & Esmali, 2024). According to a September 2024 APA survey (Page et al., 2025), writing assistance was the most common use of AI reported by psychologists, followed by content generation, and then by article summarization.
Reducing Administrative Burdens
Faculty and academic administrators may also find AI tools useful to reduce time spent on administrative tasks by drafting emails and grant proposals, generating action items from meetings, as well as handling personal tasks like planning meals and travel. The second author was able to quickly incorporate reviewer feedback on this article to add citations: ChatGPT transformed a list of six URLs into an APA-formatted reference list in three seconds. Voice-enabled AI-based virtual assistants like Siri, Google Assistant (Gemini), Alexa, and others can help with practical tasks such as setting reminders and timers, as well as answering factual questions quickly. This support can improve workflow efficiency and allow for more time and energy for other activities, including meetings with students and collaborators. Clinical administrative tasks can potentially be handled more quickly with AI support, including writing progress notes, assessment reports, recommendations, and treatment plans for clients, which could in turn free up clinicians to see more clients or engage in other revenue-generating activity, or possibly reduce burnout and enhance clinical intervention quality (APA, 2023; Stringer, 2025). Indeed, the top benefit of incorporating AI into clinical practice, endorsed by about a third of psychologists, was improved operational efficiencies (Page et al., 2025).
Creating Educational Materials
AI can assist educators in preparing instructional materials, enabling faculty to generate visually appealing presentations quickly. Tools like MagicSlides and SlidesGPT allow them to focus more on content delivery rather than design (Murray, 2023). AI can also quickly generate multiple forms of exams and unique case vignettes that meet given criteria to reduce time burden for faculty. For example, ChatGPT took 30 seconds to provide the second author with 16 EPPP-style multiple-choice questions based on a graduate-level Lifespan Development course text and assigned articles, along with the correct answer, rationale, and citation for each; she could then review the questions and check them for accuracy and representativeness of the knowledge to be assessed much more quickly than she could have generated the exam by hand. Finally, AI-driven platforms can be used to adapt content and support professors’ efforts to tailor instructional strategies to individual student needs (Boaler & Sherwood, 2024).
Providing Individualized Feedback
AI tools can analyze student submissions and provide tailored feedback. For instance, AI applications in educational settings have shown potential to enhance creativity by offering immediate, constructive feedback on assignments (Lin & Chen, 2024). While not replacing the ultimate judgment of professors, these tools may offer the potential to save time by conducting an initial review of lengthy written assignments, admission applications, or videos when trained to use detailed rubrics. AI technologies may be able to support supervisors by flagging key moments in session videos for review (Hutnyan et al., 2024) or making searchable session transcripts quickly available for review.
The Chicago School recently piloted the use of AI-chatbot standardized patients (alongside roleplays with classmates) to help first-year PsyD students practice their clinical interviewing skills before beginning to see their first practicum clients (Garrison, 2024). While the first iteration was limited to text interaction, virtual reality avatar practice patients are not far behind. Students’ experiences were mixed. Compared with roleplays with classmates alone, roleplays with AI “clients” can be done without the scheduling burden of meeting with a classmate, can allow for skill practice without evoking as much anxiety and imposter syndrome, and may feature AI “clients” trained to behave more like real clients and even to provide some real-time initial feedback to the student. However, the roleplays with AI did not provide all the benefits of the roleplays with classmates, leaving much to be desired when it came to the nonverbal aspects of the clinical encounter.
Facilitating Research
AI can aid faculty in their research by analyzing large datasets, identifying trends, generating insights, and making predictions that would be time-consuming to uncover manually (CPA, 2024; Upton, 2024). This capability allows researchers to focus on interpreting results and developing innovative solutions (Lin & Chen, 2024). AI-powered summaries of scientific papers, transcription software, and tools such as ATLAS.ti offering automatic coding of qualitative data based on researcher parameters can save hours of time for academics and students. Clinicians can also use AI to rapidly find and summarize clinical literature, more efficiently staying up to date with best practices. AI may eventually integrate data to optimize prediction of outcomes such as psychosis and suicide, ultimately enhancing the quality of clinical care psychologists can provide (APA, 2023; 2025).
Promoting Ethical Use of AI
While the benefits of AI in education are significant, the use of AI raises questions about safety, bias, privacy, and equity (APA, 2024; CPA, 2024; Flaherty, 2025). According to a September 2024 APA survey (Page et al., 2025), a slight majority (52%) of psychologists did not know the benefits of using AI in practice, but most did have concerns about the lack of rigorous testing and validation of AI tools (51%), biased input leading to biased output (54%), unanticipated social harms (54%), and potential breaches of sensitive data (59%). It is crucial for faculty to instill a sense of ethical responsibility in students regarding AI usage. This involves teaching students about the limitations and potential biases of AI systems, as well as the importance of academic integrity, informed consent and confidentiality where patient information is concerned, and the impacts of AI use on environments and marginalized groups. Users should understand that AI tools reflect the data they are trained on and know whether data entered into AI tools is stored locally or in the cloud and whether it is used to train AI models (Abrams, 2025). Faculty guidance on ethical decision-making should also include discussions about sustainability and minimizing wasteful use of LLMs, given their high energy costs and environmental impacts (Hall, 2024). Ultimately, critical thinking is essential to consider whether AI is necessary and whether its benefits outweigh its costs or risks for a given use. Clear guidelines, teaching AI literacy, addressing bias and inaccuracy, and faculty collaboration can all support optimal AI use.
Guidelines for AI Use
Institutions are encouraged to develop clear policies outlining when and how students can use AI tools in their work, including proper attribution and understanding the implications of relying on AI for academic tasks (Henderson, 2023). Training programs using AI detection tools must exercise caution and critically consider those tools’ imperfections. It is also important for training directors and faculty in academic programs to clearly communicate expectations up front to students and practicum supervisors about any restrictions on students’ use of AI tools that may be used at the practicum site. Student clinicians will need faculty and supervisors to model how to obtain informed consent when using AI in clinical settings, as well as how to determine who can access client data and what to do if clients want to retract their data.
Faculty can model and teach students how to use trustworthy sources, APA ethical principles, and critical thinking to make decisions about AI use. APA (2024) developed guidance on considerations for psychologists as they review the proliferation of AI-enabled tools marketed for mental health professional use. In 2025, APA published Ethical Guidance for AI in the Professional Practice of Health Service Psychology, which drew heavily upon the APA Ethical Principles. The guidance emphasized: 1) Transparency and Informed Consent, 2) Mitigating Bias and Promoting Equity, 3) Data Privacy and Security, 4) Accuracy and Misinformation Risk, 5) Human Oversight and Professional Judgment, and 6) Liability and Ethical Responsibility. Similarly, faculty may do well to communicate expectations about AI use in academic work in relation to these same ethical principles, standards related to competence, and other profession-wide competencies such as professional values and communication skills. Zara (2025) provided links to further reading and additional resources for safely and ethically harnessing AI to improve education and clinical care.
Teaching AI Literacy
Faculty must prioritize building AI literacy among students. This includes understanding how AI works, its applications, how to optimize its output, and its ethical implications. For instance, initiatives at universities have focused on enhancing AI literacy among faculty and students, emphasizing the strengths and limitations of AI technologies (Henderson, 2023). Scholars need to understand that the accuracy and usefulness of the output are contingent on the quality of the queries. For example, it can help to specify that ChatGPT cite and link to peer-reviewed publications within a specific time frame when generating a response, but it is also important to check the AI’s work.
Addressing Bias and Inaccuracy
Educators should emphasize the importance of critically evaluating AI outputs. AI systems can exhibit inaccuracies and biases (often centering Western thought), which can mislead users and lead to harm if not properly understood (Chaudhuri & Trainor, 2025; CPA, 2024). For example, Ross (2025) described a recent study finding that AI research bots tend to overgeneralize, summarizing research findings with unwarranted confidence and omitting nuance, uncertainty, and limitations. Unfortunately, telling bots to “stay faithful to the source material” and “not introduce any inaccuracies” yielded double the generalized conclusions compared to only asking them to “provide a summary of the main findings.” Complex responses may have been rated less helpful when bots were optimized to be “helpful,” ultimately training these tools not to express uncertainty about information outside their knowledge. The resulting generalizations could mislead and spread misinformation, especially considering the polished and authoritative appearance of outputs. Responsible use of AI acknowledges its potential for inaccuracy and bias; it uses AI to support research and not replace critical thinking. Teaching students to approach AI-generated content with a critical eye fosters responsible usage and enhances their analytical skills (Lin & Chen, 2024). Human oversight is an ethical imperative to maximize the good that can be derived from integrating AI into scholarly and clinical work, and to minimize harm (CPA, 2024).
As recognized experts in our fields, board-certified specialists may be in privileged positions to lead critical evaluation of AI tools and uses yet to come for potential issues related to bias, cultural appropriateness, and equity. Upon taking office in 2025, the Trump administration revoked the Biden administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. In March of this year, four years after the APA (2021) apologized for its role in perpetuating racism, the APA Commission on Accreditation (2025) stopped evaluating programs for specific standards pertaining to diversity in recruitment, admission, and retention, in response to an executive order. However, the APA retained the requirement that trainees learn to understand and respect individual and cultural differences as a core element of quality professional service provision. Specific strategies to reduce the potential for bias and harm include testing models for generalizing tendencies, using technologies that produce more faithful summaries, adjusting the bot’s “temperature” setting (which controls the randomness of generated text), and strong AI literacy among psychologists and students (Ross, 2025). Faculty should also teach students to recognize the role that AI may play in shaping societal change via the creation and perpetuation of misinformation and bias (Garimella & Chauchard, 2024), and encourage critical consumption of its outputs and source verification.
Faculty Development and Collaboration
To effectively integrate AI into teaching and research, faculty development is essential. Institutions are forming interdisciplinary working groups to explore innovative practices and policies related to AI. The ethical use of AI has been a focus of recent professional development offerings and meetings (e.g., Garrison, 2024; Hutnyan et al., 2024; July & Esmali, 2024), with an overarching message that these tools offer a promising starting place but not the finish line. APA (2024) released a policy statement about Artificial Intelligence and the Field of Psychology detailing how psychologists can shape AI technologies and their impact on our profession. Collaborative efforts not only support faculty in adapting to new technologies but also create a community of practice where educators can share insights and strategies for effective AI integration (Lin & Chen, 2024), as well as reliable strategies and tools for AI detection. Psychologists must have a voice in centering respect, safety, integrity and equity in the development and critical use of AI tools (APA, 2024). We must have a place at the table in the ongoing development of regulation and policy, and the application and teaching of ethical decision-making models for our students. Faculty can use our own AI literacy development as an opportunity to model core professional values of life-long learning and ethical consultation for our trainees.
Conclusion
The integration of AI into education presents both opportunities and challenges. Faculty play a crucial role in facilitating the appropriate use of AI by students, while enhancing their teaching, writing, and research capabilities. Academic psychologists must ensure that our assessments of trainees measure what they intend to measure, and consider whether these are the same skills needed in a future marketplace in which AI can do some of the things we have been requiring students to do, without being dominated by discussions about cheating when students use AI to complete assignments (Rowsell, 2025). By developing clear guidelines, promoting AI literacy, and fostering a collaborative environment, educators can prepare students to engage with AI responsibly and effectively and prepare them for professional identities as psychologists in a rapidly evolving healthcare landscape. We will redefine the irreducible core of our distinct human contributions as psychologists (Evans, 2025). The future work of psychologists with AI will be more than a faster replication of what psychologists have been doing; psychologists-plus-AI will be integrating data from more sources to make more accurate and culturally sensitive diagnoses, more personalized recommendations, more accessible and scalable preventive programs, and better integrated systems of care, of which AI tools are only part (Torous & Blease, 2024).
Board-certified specialists may be well poised to speak to unique considerations for each of our specialties and the unique settings and systems (e.g., healthcare, education, business, forensic and public safety) in which we work. The ABPP Artificial Intelligence Task Force was recently formed to help educate and guide specialists (McCutcheon & Day, 2024). Some specialty boards (e.g., American Board of Clinical Neuropsychology, American Board of Rehabilitation Psychology) have created policies for the use of AI in practice samples for board certification in their specialty. As AI continues to evolve, ongoing professional development, adaptation, ethical decision-making, and collaboration will be paramount in shaping the future of education.
References
Abrams, Z. (2023, July 1). AI is changing every aspect of psychology. Here’s what to watch for. Monitor on Psychology, 54, 46. https://www.apa.org/monitor/2023/07/psychology-embracing-ai
Abrams, Z. (2025, January 1). Artificial intelligence is impacting the field: As AI transforms our world, psychologists are working to channel its power and limit its harm. Monitor on Psychology, 56, 46. https://www.apa.org/monitor/2025/01/trends-harnessing-power-of-artificial-intelligence
American Psychological Association. (2021, October 29). Apology to people of color for APA’s role in promoting, perpetuating, and failing to challenge racism, racial discrimination, and human hierarchy in the U.S. https://www.apa.org/about/policy/racism-apology
American Psychological Association. (2025, March 12). Artificial intelligence in mental health care. https://www.apa.org/practice/artificial-intelligence-mental-health-care
American Psychological Association. (2025, June). Ethical guidance for AI in the professional practice of health service psychology. https://www.apa.org/topics/artificial-intelligence-machine-learning/ethical-guidance-ai-professional-practice
American Psychological Association. (2024, August). Artificial intelligence and the field of psychology. https://www.apa.org/about/policy/statement-artificial-intelligence.pdf
American Psychological Association. (2024, October 25). APA’s AI tool guide for practitioners. https://www.apaservices.org/practice/business/technology/tech-101/evaluating-artificial-intelligence-tool
American Psychological Association Commission on Accreditation. (2025, March 21). Addressing accredited program questions about the enforcement of diversity accreditation standards [Memorandum]. https://irp.cdn-website.com/a14f9462/files/uploaded/Message_from_the_APA_CoA.pdf
Boaler, N., & Sherwood, N. (2024, July 31). The evolving role of technology in educational psychology. Spencer Clarke Group. https://www.spencerclarkegroup.co.uk/career-hub/blog/the-evolving-role-of-technology-in-educational-psychology/
Canadian Psychological Association. (2024, January). Artificial intelligence and psychology. https://cpa.ca/docs/File/CPD/Artificial%20Intelligence%20and%20Psychology%20EN%202024.pdf
Chaudhuri, A., & Trainor, J. (2025, April 30). Three laws for curriculum design in the AI age (opinion). Inside Higher Ed. https://www.insidehighered.com/opinion/views/2025/04/30/three-laws-curriculum-design-ai-age-opinion
Evans, A. C. (2025, July 1). Redefining psychology in a rapidly changing world. Monitor on Psychology, 56, 10. https://www.apa.org/monitor/2025/07-08/ai-population-health-redefining-psychology
Flaherty, C. (2025, April 21). The digital divide: Student generative AI access. Inside Higher Ed. https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2025/04/21/half-colleges-dont-grant-students-access
Garimella, K., & Chauchard, S. (2024). How prevalent is AI misinformation? What our studies in India show so far. Nature, 630(8015), 32–34. https://doi.org/10.1038/d41586-024-01588-2
Garrison, L. (2024, November). AI enhanced clinical psychology training: A future forward approach. Texas Psychological Association Annual Convention.
Hall, S. (2024, August 22). In teaching with Gen AI, consider sustainability. Inside Higher Ed. https://www.insidehighered.com/opinion/views/2024/08/22/gen-ai-discussions-must-prioritize-sustainability-opinion#
Henderson, J. (2023, December 4). Artificial intelligence in teaching and learning. UDaily. University of Delaware. https://www.udel.edu/udaily/2023/december/artificial-intelligence-ai-teaching-learning/
Hutnyan, M., Elamin, R., Pollard, S., Branaman, T., Gottlieb, M., Palomares-Fernandez, R., Marrie, E., & Avalos, K. (2024, November). Artificial intelligence: Ethical considerations and approaches. Texas Psychological Association Annual Convention.
July, W., & Esmali, M. (2024, November). Ethical use of AI progress note and report writing apps. Texas Psychological Association Annual Convention. https://tpa.ce21.com/item/ethical-ai-progress-note-report-writing-apps-130442
Lin, H., & Chen, Q. (2024). Artificial intelligence (AI)-integrated educational applications and college students’ creativity and academic emotions: Students’ and teachers’ perceptions and attitudes. BMC Psychology, 12, 487. https://doi.org/10.1186/s40359-024-01979-0
MagicSlides. (n.d.). MagicSlides: AI-powered presentation tool. https://www.magicslides.app/
McCutcheon, J., & Day, J. R. (2024). Introducing the ABPP Artificial Intelligence Task Force. On Board with Professional Psychology, 3.
Murray, R. B. (Host). (2023, November 20). Using AI tools to prepare slide decks [Audio podcast episode]. In The Pulse. Inside Higher Ed. https://www.insidehighered.com/podcasts/pulse/2023/11/20/using-ai-tools-prepare-slide-decks
Page, C., Assefa, M., & Stamm, K. (2025, July 1). What psychologists are saying about using AI in practice. Monitor on Psychology, 56, 25. https://www.apa.org/monitor/2025/07-08/ai-use-psychological-practice
Prochaska, J. J., Vogel, E. A., Chieng, A., Kendra, M., Baiocchi, M., Pajarito, S., & Robinson, A. (2021). A therapeutic relational agent for reducing problematic substance use (Woebot): Development and usability study. Journal of Medical Internet Research. https://doi.org/10.2196/24850
Ross, J. (2025, April 24). AI research summaries ‘exaggerate findings,’ study warns. Times Higher Education. https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2025/04/24/ai-research-summaries-exaggerate-findings
Rowsell, J. (2025, February 28). AI: Cheating matters but redrawing assessment ‘matters most’. Times Higher Education. https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2025/02/28/ai-cheating-matters-redrawing-assessment
SlidesGPT. (n.d.). SlidesGPT: AI-generated presentations from text. https://slidesgpt.com/
Stringer, H. (2025, January 1). Technology is reshaping practice to expand psychology’s reach. Monitor on Psychology, 56, 66. https://www.apa.org/monitor/2025/01/trends-technology-shaping-practice
Torous, J., & Blease, C. (2024). Generative artificial intelligence in mental health care: Potential benefits and current challenges. World Psychiatry, 23, 1–2. https://doi.org/10.1002/wps.21148
Torous, J., Bucci, S., Bell, I., Kessing, L. V., Faurholt-Jepsen, M., Whelan, P., Carvalho, A. F., Keshavan, M., Linardon, J., & Firth, J. (2021). The growing field of digital psychiatry: Current evidence and the future of apps, social media, chatbots, and virtual reality. World Psychiatry, 20, 318–335. https://doi.org/10.1002/wps.20883
Upton, K. (2024, December 2). Artificial intelligence in psychology: How machine learning is changing the field. All Psychology Schools. https://www.allpsychologyschools.com/blog/ai-in-psychology-how-machine-learning-is-changing-the-field/
Timothy Branaman, PhD, ABPP
Board Certified in Forensic Psychology
Correspondence: drtimbranaman@gmail.com
Sara Pollard, PhD, ABPP
Board Certified in Couple and Family Psychology
Correspondence: spollard@thechicagoschool.edu