Bias in AI: Ethical Considerations for Use of AI in Psychological Practice

  • Date created: December 18, 2025
  • Issue 7
Specialists discuss how psychologists can avoid harm and bias while integrating AI into their work.

This article is part of a Special Section of On Board with Professional Psychology that focuses on the intersection of professional psychology and Artificial Intelligence (AI). Learn more about ABPP’s Artificial Intelligence Task Force.

The use of artificial intelligence (AI) has vast potential in psychological practice, including advertising, content creation, communication, virtual or calendar assistance, record review, transcription, and research. However, as with any new tool or approach, the excitement of integrating it into your professional practice needs to be tempered by first understanding its limitations and the considerations that keep you well within ethical guidelines. It is important to understand what AI is and what it is not.

A Brief Introduction to AI Platforms and How Error Is Introduced and Perpetuated

AI refers to machines that simulate human intelligence; AI systems can perform tasks that typically require human cognition. AI creates its magic by pulling together information from preexisting datasets with the goal of solving a specific problem it has been “asked” to solve. The content generated by AI is based on what the system estimates is “likeliest,” which may or may not be factually accurate, because AI learns from patterns in data rather than from a true understanding of facts. The reality is that, at this juncture, we do not have a full understanding of how AI algorithms actually work. This is referred to as the “black box” phenomenon.
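
To make the “likeliest, not necessarily true” point concrete, here is a minimal sketch in plain Python of pattern-based prediction. The toy corpus and example sentence are invented for illustration: the model simply returns the most frequent continuation it has seen, whether or not that continuation is factually correct.

    from collections import Counter, defaultdict

    # Toy "training data": the model only ever sees these phrases.
    # In this invented corpus the most common continuation is wrong
    # (Sydney rather than Canberra).
    corpus = [
        "the capital of australia is sydney",
        "the capital of australia is sydney",
        "the capital of australia is canberra",
    ]

    # Count which word follows each context (a crude pattern model).
    continuations = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        context, next_word = " ".join(words[:-1]), words[-1]
        continuations[context][next_word] += 1

    def predict(context: str) -> str:
        """Return the most frequent continuation seen in training."""
        return continuations[context].most_common(1)[0][0]

    # The model outputs the likeliest pattern, not the factual answer.
    print(predict("the capital of australia is"))  # -> "sydney"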

What we do know is that AI systems are trained on an initial dataset and then evolve over time as new data is received through use of the system. For example, ChatGPT, a large language model, is periodically updated with new data, which can include content and interactions contributed by users. As such, each AI system and platform is a microcosm of society and a reflection of its creators, contributors, and users. The developers’ own beliefs, values, stereotypes, and knowledge gaps are built into the algorithm. The AI system “learns” these perceptions of society and interprets them as “truth” to replicate. As a result, discriminatory patterns or biased metrics in the training data can become encoded into the algorithm’s response patterns, and the algorithm goes on to perpetuate bias and inequity.
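
As a simplified, hypothetical illustration of how a discriminatory pattern in training data becomes encoded in a model’s outputs, the plain-Python sketch below “trains” a naive screening rule on invented historical hiring decisions in which Group A was favored; the rule then reproduces that disparity for new applicants. The groups, numbers, and threshold are all made up for illustration.

    from collections import defaultdict

    # Invented historical decisions: (group, hired). Group A was hired
    # far more often than Group B in this toy dataset.
    history = ([("A", 1)] * 80 + [("A", 0)] * 20 +
               [("B", 1)] * 30 + [("B", 0)] * 70)

    # "Training": learn the historical hire rate for each group.
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in history:
        totals[group] += 1
        hires[group] += hired
    hire_rate = {g: hires[g] / totals[g] for g in totals}

    def screen(group: str) -> bool:
        """Naive model: recommend an applicant if their group's historical
        hire rate exceeds 50% -- the past pattern is treated as 'truth'."""
        return hire_rate[group] > 0.5

    print(hire_rate)    # {'A': 0.8, 'B': 0.3}
    print(screen("A"))  # True  -- the advantaged group keeps its advantage
    print(screen("B"))  # False -- the historical disadvantage is replicated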

Manifestations of Bias in AI

Bias can present in many forms within AI platforms, with notable social biases related to gender, race, sexual orientation, and ability. Gender bias in AI includes subservient voice assistants being given female voices and names, hiring algorithms that prefer male over female candidates, and AI-generated images portraying women as sexualized and smiling (O’Connor & Liu, 2023; Shah, 2025). Racial bias includes AI facial recognition tools that are more accurate for white men and less accurate for women of color, AI-simulated mental health crises that result in different outcomes based on patients’ race, and name discrimination in housing and loan applications (Timmons et al., 2022; Zhang & Han, 2022). Social media content by LGBTQIA+ users is flagged by AI as inappropriate twice as often as heterosexual users’ content. AI training data often fail to include the perspectives and needs of people with disabilities, and accessible content is not consistently created (Urbina et al., 2024). Overall, AI algorithms tend to favor privileged groups and disadvantage those with less social privilege: a system trained on data from a majority group will be less accurate for, and biased against, minoritized groups.

Keeping an Ethical Lens in the Use of AI 

Great resources for beginning to explore the ethical use of AI in psychological practice are the recently published APA Ethical Guidance for AI in the Professional Practice of Health Service Psychology (APA, 2025) and the Companion Checklist for an AI-Enabled Clinical or Administrative Tool (APA, 2024). Attending to whether a given use of AI in psychology practice is beneficial and avoids harm is the foundation of its ethical use. Several evolving rules of thumb can help direct your choice and use of AI toward beneficence and away from maleficence.

When selecting an AI platform, psychologists need to be thoughtful about the type and purpose(s) of the platform as well as who was engaged in the development process. You need to understand what dataset was used for training and whether the breadth and depth of those data are representative of your clients, patients, or situations, thus mitigating the influence of bias. Identifying whether psychologists serve on the platform’s development board and/or were involved in the development process may help you determine which AI platform is best matched to the use(s) you are considering in your practice. Additionally, fairness-assessing tools such as Aequitas, Fairness, What-If Tool, or Themis-AI can help detect and adjust for bias in the AI platforms we select to use in practice. Knowing this information about the dataset can help you choose the best match for your practice, given how you would like to use AI, and will help you stay well aligned with our guiding ethical principles.
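
As a rough sketch of the kind of check such fairness tools automate, the plain-Python example below computes per-group selection rates and a demographic-parity gap for a hypothetical model’s decisions. The data, group labels, and cutoff are invented for illustration and are not tied to any particular platform.

    from collections import defaultdict

    # Hypothetical audit data: (group, model_decision), where 1 means the
    # model made the favorable decision for that person.
    decisions = ([("A", 1)] * 45 + [("A", 0)] * 55 +
                 [("B", 1)] * 25 + [("B", 0)] * 75)

    # Selection rate per group (the basis of demographic-parity checks).
    totals, selected = defaultdict(int), defaultdict(int)
    for group, decision in decisions:
        totals[group] += 1
        selected[group] += decision
    rates = {g: selected[g] / totals[g] for g in totals}

    # Demographic-parity gap: difference between highest and lowest rates.
    gap = max(rates.values()) - min(rates.values())

    print(rates)                                 # {'A': 0.45, 'B': 0.25}
    print(f"Demographic-parity gap: {gap:.2f}")  # 0.20 -- a flag for review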

Control over the information that we use in our daily practice of psychology is foundational to adhering to our principles as psychologists. We have a primary obligation and take reasonable precautions to protect confidential information obtained through or stored in any medium. When choosing an AI platform, psychologists should understand that platforms operated in the cloud have limited privacy protections. Even though many AI platform providers state that they restrict data use and thus offer HIPAA-compliant services, it remains unclear whether any cloud-based service truly maintains confidentiality and privacy, because psychologists do not ultimately maintain control of the data. To date, the best-practice recommendation for ensuring privacy is to run core AI systems locally, either on desktop machines or on organizational servers.
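
Whether a system runs locally or in the cloud, stripping obvious identifiers from text before it reaches an AI tool adds a layer of protection. The sketch below is a minimal, regex-based redaction pass in Python; the patterns and sample note are invented, and this kind of pass is illustrative only and does not by itself constitute HIPAA-grade de-identification.

    import re

    # Very rough patterns for a few obvious identifiers; real de-identification
    # requires far more than this (names, addresses, record numbers, etc.).
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
        "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    }

    def redact(text: str) -> str:
        """Replace matches of each pattern with a placeholder tag."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    note = "Client reached at 410-555-1234 or jdoe@example.com; seen 3/14/2025."
    print(redact(note))
    # Client reached at [PHONE] or [EMAIL]; seen [DATE].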

Once you have chosen an AI platform to use in your practice, transparency is the next rule of thumb to keep in mind. Ensure that your clients are aware of which aspects of your services will involve AI, your reasons for using it, and how their data will be used and protected. Incorporating this information into your informed consent and disclosure process ensures that you are being transparent and allows your clients, patients, and colleagues to make an informed decision about engaging with your psychological practice. All of our clients have the right to autonomy and to make decisions about how their information will be used. Psychologists must keep in mind, however, that to truly obtain informed consent, the client needs to be educated about the basics of the AI platform you are using, how the platform will be used in the services you are providing, and how you will ensure that their data remain protected, including procedures to anonymize transmitted data (even in a locally operated system). The psychologist must be able to explain the risks and benefits of using AI, because trust in AI depends not only on performance but also on transparency and explainability. Taking an opt-in consent approach, in which the client ultimately decides whether they understand AI and want to consent to its use in the services rendered to them, offers the strongest privacy protection.

We can all serve as advocates for ethical use of AI. We can educate other mental health providers and interdisciplinary colleagues about AI’s potential for bias to reduce the chance of harm being done. For example, we could inform court stakeholders who use AI algorithms for risk assessment, detention, and sentencing decisions about the potential bias built into such algorithms because they were trained on data from a historically biased legal system (Bains, 2024). We can educate the developers of AI algorithms and software to be aware of the presence of bias, how bias can be perpetuated by AI, and ways to reduce bias. As mental health experts, we could serve on AI advisory boards and provide developers with feedback to ensure that diverse viewpoints, ethics, and experiences are integrated. We can advocate for developers to implement inclusive hiring practices and examine training data, modeling, and output for disparate impact. Finally, we can encourage developers to use fairness-assessing tools and set fairness metrics and definitions for their platforms.

In summary, AI is continually evolving at a dizzying pace, and we will have to stay up to date and tailor our approaches over time. Understanding the potential benefits and risks of AI, including concerns about accuracy, data security, and bias, will keep our use of this tool in alignment with our ethical values. Keeping an eye on integrating fairness and reducing bias in all aspects of our psychological practice is a core ethical responsibility.

References

Bains, C. (2024). The legal doctrine that will be key to preventing AI discrimination. Brookings. https://www.brookings.edu/articles/the-legal-doctrine-that-will-be-key-to-preventing-ai-discrimination/

American Psychological Association. (2024). Companion checklist: Evaluation of an AI-enabled clinical or administrative tool. https://www.apaservices.org/practice/business/technology/tech-101/evaluating-artificial-intelligence-tool-checklist.pdf

American Psychological Association. (2025). Ethical guidance for AI in the professional practice of health service psychology. https://www.apa.org/topics/artificial-intelligence-machine-learning/ethical-guidance-professional-practice.pdf

O’Connor, S., & Liu, H. (2023). Gender bias perpetuation and mitigation in AI technologies: Challenges and opportunities. AI & Society, 39, 2045–2057. https://doi.org/10.1007/s00146-023-01675-4

Shah, S. (2025). Gender bias in artificial intelligence: Empowering women through digital literacy. Premier Journal of Artificial Intelligence, 1–13. https://doi.org/10.70389/pjai.1000088

Timmons, A. C., Duong, J. B., Simo Fiallo, N., Lee, T., Vo, H. P. Q., Ahle, M. W., Comer, J. S., Brewer, L. C., Frazier, S. L., & Chaspari, T. (2022). A call to action on assessing and mitigating bias in artificial intelligence applications for mental health. Perspectives on Psychological Science, 18(5), 1062–1096. https://doi.org/10.1177/17456916221134490

Urbina, J. T., Vu, P. D., & Nguyen, M. V. (2024). Disability ethics and education in the age of artificial intelligence: Identifying ability bias in ChatGPT and Gemini. Archives of Physical Medicine and Rehabilitation, 14–19. https://doi.org/10.1016/j.apmr.2024.08.014

Zhang, J., & Han, Y. (2022). Algorithms have built racial bias in legal system-Accept or not? Advances in Social Science, Education and Humanities Research, 631, 1217–1221. https://doi.org/10.2991/assehr.k.220105.224


Kathleen T. Bechtold, PhD, ABPP

Board Certified in Rehabilitation Psychology & Clinical Neuropsychology
Correspondence: dr.kathleen.bechtold@gmail.com


Jennifer Christman, PsyD, ABPP

Board Certified in Forensic Psychology
Correspondence: DrChristman@ForensicPsySolutions.com
