This article is part of a Special Section of On Board with Professional Psychology that focuses on the intersection of professional psychology and Artificial Intelligence (AI). Learn more about ABPP’s Artificial Intelligence Task Force.
Advancements in artificial intelligence (AI) and biomarker science are revolutionizing our understanding and treatment of mental health conditions. While AI can identify hidden patterns in large and complex datasets, biomarkers—measurable biological signals, such as cortisol levels, heart rate variability, and brainwave activity—provide insight into the body’s internal emotional and cognitive states. These indicators show how stress, mood, and trauma leave physiological marks, enabling clinicians to go beyond self-report and monitor mental health in real time. Together, AI and biomarkers are transforming the way we diagnose, personalize, and intervene in mental healthcare (Poalelungi et al., 2023).
Biomarkers in Mental Health
Biomarkers, whether genetic, neuroimaging-based, or physiological, are becoming indispensable tools in the evolving landscape of psychological assessment and intervention. Unlike traditional diagnostic methods, which rely primarily on observed behaviors, self-measures, and clinical interviews, biomarkers offer objective indicators of underlying biological processes that contribute to mental health conditions. These indicators may include stress-related hormones (e.g., cortisol), inflammatory markers such as C-reactive protein (CRP), and neural activity patterns observable through techniques like functional magnetic resonance imaging (fMRI) or EEG.
Biomarkers can aid in identifying individuals at increased risk for disorders like depression (García-Gutiérrez et al., 2020), PTSD, or schizophrenia, often before the full onset of symptoms. They can provide insight into treatment response and disease progression. For example, indicators of dysregulation of the hypothalamic-pituitary-adrenal (HPA) axis and elevated proinflammatory cytokines have been linked to chronic depression (Strawbridge, Young, & Cleare, 2017), while neuroimaging can reveal structural abnormalities in brain regions implicated in psychotic disorders.
Despite their promise, the use of biomarkers in mental health has historically been limited by concerns around specificity, reproducibility across populations, and clinical applicability. However, this landscape is shifting. The integration of artificial intelligence (AI) and machine learning enables researchers to analyze large-scale, multidimensional datasets, uncovering biological signatures that traditional methods might miss (Alowais et al., 2023). AI facilitates the identification of latent phenotypes, predictive risk profiles, and individualized treatment pathways, ushering in a more biologically informed, data-driven approach to psychological care.
This evolution reflects a broader movement toward “precision psychology,” where clinical decision-making is enhanced by real-time, biologically grounded insights that complement clinical expertise and the patient’s lived experience.
The Role of AI in Biomarker Discovery and Application
AI’s ability to analyze complex and large datasets has made it an invaluable tool in the field of biomarker discovery. Machine learning (ML) algorithms can process vast amounts of biological data, from genetic sequencing to neuroimaging, to identify patterns that would otherwise be difficult to discern. These patterns can lead to the discovery of new biomarkers that are highly predictive of mental health disorders, allowing for more accurate diagnoses.
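As a purely illustrative sketch of this kind of pattern screening (synthetic data, hypothetical biomarker names, and a standard effect-size ranking rather than any published pipeline), a first pass might rank candidate markers by how strongly they separate cases from controls:

```python
import math
import random
import statistics

random.seed(0)

# Synthetic cohort: 200 individuals, 3 candidate biomarkers.
# "cortisol" is constructed to differ between groups; the others are noise.
# All names, units, and values are invented for illustration only.
def simulate(n=200):
    rows = []
    for _ in range(n):
        case = random.random() < 0.5  # True = meets diagnostic criteria
        rows.append({
            "case": case,
            "cortisol": random.gauss(14.0 if case else 10.0, 2.0),
            "crp": random.gauss(2.0, 1.0),
            "hrv": random.gauss(50.0, 10.0),
        })
    return rows

def cohens_d(cases, controls):
    """Standardized mean difference (Cohen's d) between two samples."""
    n1, n2 = len(cases), len(controls)
    s1, s2 = statistics.stdev(cases), statistics.stdev(controls)
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (statistics.mean(cases) - statistics.mean(controls)) / pooled

rows = simulate()
features = ["cortisol", "crp", "hrv"]
ranked = sorted(
    features,
    key=lambda f: abs(cohens_d(
        [r[f] for r in rows if r["case"]],
        [r[f] for r in rows if not r["case"]],
    )),
    reverse=True,
)
print(ranked)  # the signal-carrying marker ranks first
```

In real biomarker discovery, a univariate screen like this is only a starting point; it would be followed by cross-validated multivariate models and replication in independent samples.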
García-Gutiérrez et al. (2020) highlight how AI is being utilized to enhance the understanding of psychiatric disorders. Their research emphasizes the importance of categorizing biomarkers into diagnostic, prognostic, and monitoring categories, with AI playing a critical role in integrating these markers into clinical practice. For example, biomarkers related to brain-derived neurotrophic factor (BDNF) or cortisol levels can help predict susceptibility to depression or anxiety. AI systems can monitor these biomarkers over time to provide early warnings, enabling clinicians to intervene before a condition worsens. Nigatu, Liu, and Wang (2016) have taken this a step further by using AI to analyze primary care data and identify individuals at high risk of developing depression up to two years before diagnosis. This predictive capability represents a significant advancement in preventive mental health care, enabling earlier intervention and improved outcomes.
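The "early warning" idea can be made concrete with a toy monitoring rule — entirely hypothetical data and thresholds, not a validated clinical algorithm: flag a patient when several consecutive readings exceed their own rolling baseline.

```python
import statistics

def early_warning(series, window=14, k=2.0, run=3):
    """Return the index at which `run` consecutive readings exceed the
    rolling baseline mean + k * SD, or None if no sustained elevation.
    A toy sketch of biomarker monitoring, not a validated clinical rule."""
    streak = 0
    for i in range(window, len(series)):
        base = series[i - window:i]          # the person's own recent baseline
        mean, sd = statistics.mean(base), statistics.stdev(base)
        if series[i] > mean + k * sd:
            streak += 1
            if streak >= run:
                return i
        else:
            streak = 0
    return None

# Two weeks of stable (synthetic) morning readings, then a sustained jump;
# units and magnitudes are illustrative only.
baseline = [10.0, 10.5, 9.8, 10.2, 9.9, 10.1, 10.4, 9.7,
            10.3, 10.0, 9.6, 10.2, 10.1, 9.9]
elevated = [16.0, 16.5, 17.0]

print(early_warning(baseline + elevated))            # index of the alert
print(early_warning(baseline + [10.1, 10.0, 10.2]))  # None: no alert
```

Keying the threshold to each person's own history, rather than a population norm, is one simple way such a system can avoid flagging ordinary individual variation.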
Ethical Considerations in AI and Biomarker Integration
As artificial intelligence and biomarker technologies become more integrated into mental health care, they bring critical ethical concerns to the forefront. How do we ensure that innovations capable of detecting deeply personal biological signals also safeguard privacy and consent? Issues of transparency, equity, and data protection must be addressed proactively to ensure that progress in care does not come at the expense of patient trust and dignity.
Privacy and Data Security
AI tools often collect and analyze sensitive health information, including heart rate, cortisol levels, facial expressions, and neuroimaging data. This type of data can reveal deeply personal mental states and may expose patients to reputational or psychological harm if not adequately protected. However, many commercial health apps fail to meet robust privacy standards, and some have shared user data with third parties without providing complete transparency (Martinez-Martin et al., 2018). Clinicians are ethically obligated under the APA Ethics Code to protect confidentiality and ensure that technology partners do the same. Legal protections such as HIPAA exist, but enforcement may lag behind technological innovation (Price & Cohen, 2019).
Informed Consent
Informed consent for AI-enhanced interventions must go beyond a general explanation. Patients should be told precisely which biomarkers are being collected (e.g., sleep patterns, skin conductance, or salivary cortisol), how the data is gathered (e.g., through wearables or smartphone apps), and what the AI system does with it, such as identifying stress levels or predicting depressive episodes.
For example, tools like Google Health and Apple HealthKit are increasingly used in psychological treatment through biofeedback applications. These platforms passively collect physiological data and apply algorithmic models to interpret mental states. Patients should be informed of how their data is stored, who can access it, and the limitations of AI-generated insights (Bajwa et al., 2021). These outputs are probabilistic, not diagnostic, and may reflect training bias if the AI was not developed using diverse, representative datasets (Cross, Choma, & Onofrey, 2024).
Psychologists must also clarify whether AI is assisting or guiding clinical decision-making and be transparent about any risks of over-reliance on automation. Ethical care includes preserving patient autonomy and ensuring that technology serves, rather than replaces, human judgment.
Equity and Bias
AI systems in mental health care are only as effective as the data on which they are trained. When datasets lack demographic diversity or are skewed, algorithms can misclassify symptoms or yield inaccurate predictions for underrepresented groups, perpetuating disparities in mental healthcare rather than resolving them. This concern is mirrored in biomarker research. Inflammatory markers such as C-reactive protein (CRP) and interleukin-6 (IL-6) have been linked to increased risk for depression (Liu, 2024). However, baseline levels of these biomarkers vary across racial and ethnic groups, with Black Americans consistently showing higher CRP levels, likely due to chronic exposure to social and environmental stressors, not biological inferiority (Xiong et al., 2024). Applying uniform biomarker thresholds therefore risks introducing diagnostic bias and leading to inappropriate treatment planning.
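The thresholding problem can be illustrated with invented reference values (not real clinical norms): a single uniform cutoff flags a reading that is unremarkable relative to the person's own reference population, while a group-aware interpretation does not.

```python
import statistics

# Hypothetical reference samples: baseline CRP (mg/L) differs across groups
# for social and environmental reasons. All values are invented for
# illustration only and are not clinical reference data.
reference = {
    "group_a": [0.8, 1.0, 1.2, 0.9, 1.1, 1.0, 0.7, 1.3],
    "group_b": [1.8, 2.0, 2.2, 1.9, 2.1, 2.0, 1.7, 2.3],
}

UNIFORM_CUTOFF = 1.6  # one threshold applied to everyone

def uniform_flag(crp):
    return crp > UNIFORM_CUTOFF

def group_adjusted_flag(crp, group, k=2.0):
    """Flag only readings elevated relative to the person's own reference
    population (mean + k * SD), rather than an absolute cutoff."""
    base = reference[group]
    return crp > statistics.mean(base) + k * statistics.stdev(base)

# The same reading is "elevated" under the uniform rule, unremarkable
# against group_b's reference distribution, and elevated against group_a's.
reading = 2.0
print(uniform_flag(reading))                       # True
print(group_adjusted_flag(reading, "group_b"))     # False
print(group_adjusted_flag(reading, "group_a"))     # True
```

Even this toy adjustment shows why interpretation requires context: the clinically meaningful question is whether a reading is elevated for this person, given their reference population and circumstances, not whether it crosses one universal line.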
These issues recall the legacy of Henrietta Lacks, whose cells were used without consent. While today’s challenges are more complex, the underlying concern remains: the use of biological data without sufficient consideration for context, consent, and equity. Without intentional correction, both AI and biomarker-based psychiatry risk codifying systemic inequities into clinical decision-making.
In efforts to advance equity, mental health technologies must be trained on inclusive datasets, and biological metrics should be interpreted within a psychosocial framework that accounts for race, environment, and lived experience, not just cellular readouts.
Collaboration: The Key to Advancing AI and Biomarkers in Mental Health
The future of AI and biomarkers in mental health care hinges on interdisciplinary collaboration. Clinical psychologists, neuropsychologists, environmental psychologists, AI developers, and public health professionals must collaborate to design interventions that are not only effective but also ethically and socially responsive. Mental health cannot be understood in isolation from social, cultural, and economic contexts. Factors such as poverty, systemic discrimination, environmental exposure, and access to care shape both symptom expression and treatment response.
As psychologists, we must embrace AI as a powerful tool to augment our understanding of mental health, while maintaining a critical commitment to equity. Rigorous validation, ethical safeguards, and culturally inclusive design are essential to ensure that AI and biomarker-driven care serve diverse populations without reinforcing existing disparities.
Invitation to Collaborate
We invite psychologists, healthcare professionals, and AI experts to join us in exploring the promising intersection of AI and biomarkers for mental health care. Together, we can pave the way for more personalized, practical, and ethical approaches to treatment. If you are interested in collaborating on this exciting journey or would like more information about ABPP’s Artificial Intelligence Task Force, please contact Dr. Jeni McCutcheon at jenimccutcheon@aol.com. Let’s work together to shape the future of mental health care.
References
Alowais, S. A., Alghamdi, S. S., Alsuhebany, N., Alqahtani, T., Alshaya, A. I., Almohareb, S. N., Aldairem, A., Alrashed, M., Bin Saleh, K., Badreldin, H. A., Al Yami, M. S., Al Harbi, S., & Albekairy, A. M. (2023). Revolutionizing healthcare: The role of artificial intelligence in clinical practice. BMC Medical Education, 23(1), 689. https://pmc.ncbi.nlm.nih.gov/articles/PMC10517477/
American Psychological Association. (2017). Ethical principles of psychologists and code of conduct (2002, amended June 1, 2010, and January 1, 2017). https://www.apa.org/ethics/code
Bajwa, J., Munir, U., Nori, A., & Williams, B. (2021). Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthcare Journal, 8(2), e188–e194. https://doi.org/10.7861/fhj.2021-0095
Cross, J. L., Choma, M. A., & Onofrey, J. A. (2024). Bias in medical AI: Implications for clinical decision-making. PLOS Digital Health, 3(11), e0000651. https://doi.org/10.1371/journal.pdig.0000651
García-Gutiérrez, M. S., Navarrete, F., Sala, F., Gasparyan, A., Austrich-Olivares, A., & Manzanares, J. (2020). Biomarkers in psychiatry: Concept, definition, types, and relevance to the clinical reality. Frontiers in Psychiatry, 11, 432. https://doi.org/10.3389/fpsyt.2020.00432
Liu, C. (2024). Addressing bias and inclusivity in AI-Driven Mental Health Care. Psychiatric News, 59(10). https://doi.org/10.1176/appi.pn.2024.10.10.21
Martinez-Martin, N., Wang, N., Greene, J. A., & Ethical Digital Health Working Group. (2018). Data protection and mental health apps: Understanding the privacy gaps. NPJ Digital Medicine, 1(1), 31. https://doi.org/10.2196/mental.9423
Nigatu, Y. T., Liu, Y., & Wang, J. (2016). External validation of the international risk prediction algorithm for major depressive episode in the US general population: The PredictD-US study. BMC Psychiatry, 16, 256. https://doi.org/10.1186/s12888-016-0971-x
Poalelungi, D. G., Musat, C. L., Fulga, A., Neagu, M., Neagu, A. I., Piraianu, A. I., & Fulga, I. (2023). Advancing patient care: How artificial intelligence is transforming healthcare. Journal of Personalized Medicine, 13(8), 1214. https://doi.org/10.3390/jpm13081214
Price, W. N., II, & Cohen, I. G. (2019). Privacy in the age of medical big data. Nature Medicine, 25(1), 37–43. https://pmc.ncbi.nlm.nih.gov/articles/PMC6376961/
Strawbridge, R., Young, A. H., & Cleare, A. J. (2017). Biomarkers for depression: Recent insights, current challenges and future prospects. Neuropsychiatric Disease and Treatment, 13, 1245–1262. https://doi.org/10.2147/NDT.S114542
Xiong, C., Schindler, S., Luo, J., Morris, J., Bateman, R., Holtzman, D., Cruchaga, C., Babulal, G., Henson, R., Benzinger, T., Bui, Q., Agboola, F., Grant, E., Emily, G., Moulder, K., Geldmacher, D., Clay, O., Roberson, E., Murchison, C., Wolk, D., … Shaw, L. (2024). Baseline levels and longitudinal rates of change in plasma Aβ42/40 among self-identified Black/African American and White individuals. Research Square [Preprint], rs.3.rs-3783571. https://pmc.ncbi.nlm.nih.gov/articles/PMC10802715/

Anaia Leilani Keali’i-Jolie, LMFT, PsyD
Correspondence: drjolie@welltherapy.healthcare

Jeni McCutcheon, PsyD, MSCP, ABPP
Board Certified in Police and Public Safety Psychology & Clinical Psychology
Chair of the ABPP Artificial Intelligence Task Force (AITF)
Correspondence: jenimccutcheon@aol.com