A Blueprint for Using AI in Psychotherapy

Date: June 21, 2023
Topics: Healthcare, Natural Language Processing, Machine Learning

Scholars outline the best possible uses for these tools and the path to deployment.

Depression is the leading cause of disability worldwide. Anxiety disorders will affect nearly one-third of U.S. adults at some point in their lives. Mental health problems are burdensome and ubiquitous.

And while AI holds tremendous potential for improving the science and practice of psychotherapy, this remains a decidedly high-stakes area. The goal is not simply to make treatment more efficient but to improve lives, and to avoid outcomes as grave as suicide.

In a new working paper with seven co-authors who range in disciplinary background from psychology to computer science, Johannes Eichstaedt and Elizabeth (Betsy) Stade define the potential benefits and concerns of deploying AI in psychotherapy. The authors articulate their vision for how AI might be put to good use in this space. “We outline what rigorous and safe evaluation would look like,” says Stade, the paper’s lead author, a graduate student at the University of Pennsylvania and an incoming postdoc at Stanford. “This really needs to be done responsibly.”

The Value of AI in Psychotherapy

One of the clearest applications of AI in psychotherapy, and one well within reach of near-term technology, is its use as a kind of supercharged secretary. Done right, AI can help clinicians with intake interviews, documentation, notes, and other basic tasks; it is a tool to make their lives easier.

“Important parts of the diagnosis and treatment pipeline can be cumbersome for both the therapist and the client, like symptom-tracking questionnaires or progress notes,” Stade says. “Handing these lower-level tasks and processes to automated systems could free up clinicians to do what they do best: careful differential diagnosis, treatment conceptualization, and big-picture insights.”
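
To make the secretarial role concrete, here is a minimal sketch: asking a language model for a draft progress note that the clinician then reviews. The function generate_text is a hypothetical stand-in for whatever model API a clinic actually uses, and the SOAP headings are one common note format, not a specification from the paper.

```python
# A minimal sketch of the "supercharged secretary" role. generate_text is
# a hypothetical stand-in for whatever LLM API a clinic actually uses;
# the SOAP headings are one common note format, not the paper's spec.

def draft_progress_note(transcript: str, generate_text) -> str:
    """Ask a language model for a draft note that the clinician reviews."""
    prompt = (
        "Summarize this psychotherapy session transcript as a draft "
        "progress note with Subjective, Objective, Assessment, and Plan "
        "sections. Flag anything ambiguous for clinician review.\n\n"
        f"Transcript:\n{transcript}"
    )
    draft = generate_text(prompt)  # hypothetical model call
    # The clinician, not the model, signs off on the final note.
    return draft + "\n\n[DRAFT - pending clinician review]"
```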

Patients stand to reap similar benefits from AI systems. Psychotherapy often involves tasks that are assigned to patients between sessions, like practice worksheets and activities to be completed at home. These may be designed, for example, to help a patient track her thoughts and feelings for discussion in her next therapy session. An AI system could make this process much more engaging and dynamic and, as a result, more effective.
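
What might that more dynamic homework look like? Below is a hedged sketch assuming a standard CBT-style thought record; the fields and prompts are illustrative, not drawn from the paper.

```python
# A sketch of interactive between-session homework: a CBT-style thought
# record that a conversational agent could walk a patient through.
# The fields follow a standard thought-record worksheet and are
# illustrative assumptions, not the paper's design.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ThoughtRecord:
    situation: str                 # what happened
    automatic_thought: str         # the immediate interpretation
    emotion: str                   # e.g., "anxious"
    intensity: int                 # patient's 0-100 rating
    evidence_for: list[str] = field(default_factory=list)
    evidence_against: list[str] = field(default_factory=list)
    balanced_thought: str = ""
    created: datetime = field(default_factory=datetime.now)

def next_prompt(record: ThoughtRecord) -> str:
    """Choose the next question an assistant might ask, given progress so far."""
    if not record.evidence_for:
        return "What evidence supports that thought?"
    if not record.evidence_against:
        return "Is there any evidence that doesn't fit that thought?"
    if not record.balanced_thought:
        return "How might you restate the thought in a more balanced way?"
    return "Nice work - we can review this in your next session."
```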

Finally, AI could dramatically improve the scientific and experimental foundations behind different therapeutic approaches. For one, as chatbot technology improves, future bots could support controlled trials with combinations of hundreds of distinct interventions across thousands or hundreds of thousands of patients — an impossibility if human therapists were needed to introduce and deliver each intervention. Beyond enabling such “super science,” AI is already being used to analyze transcripts of therapy sessions and determine whether interventions are being used properly.

“We know that psychotherapy works, but we also know it can work better,” Stade says. “If we’re able to use transcripts to track what actually happens in therapy, then link it to therapy outcomes, we can improve our clinical interventions.”
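
As a toy illustration of that transcript-to-outcomes loop, the sketch below counts how often expected interventions appear in a therapist's turns. Real systems use trained classifiers over validated coding schemes; the labels and keyword markers here are placeholders to keep the example self-contained.

```python
# A toy version of transcript fidelity checking: counting how often
# expected interventions show up in a therapist's turns. The labels and
# keyword markers are illustrative placeholders, not a real taxonomy.

INTERVENTION_MARKERS = {
    "cognitive_restructuring": ["evidence for that thought", "another way to look"],
    "behavioral_activation": ["schedule an activity", "plan for this week"],
    "validation": ["that makes sense", "understandable that you feel"],
}

def fidelity_report(therapist_turns: list[str]) -> dict[str, int]:
    """Count therapist utterances matching each intervention's markers."""
    counts = {label: 0 for label in INTERVENTION_MARKERS}
    for turn in therapist_turns:
        lowered = turn.lower()
        for label, markers in INTERVENTION_MARKERS.items():
            if any(marker in lowered for marker in markers):
                counts[label] += 1
    return counts
```

Aggregated across many sessions and joined with symptom-measure outcomes, counts like these are the raw material for the track-then-link loop Stade describes.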

A Road to Responsible Development

Given these prospects, and given that mental health is a $100 billion market, Eichstaedt fears companies will rush into this space advertising solutions without due diligence. He has already been contacted by venture capitalists eager to apply machine learning tools to psychotherapy, who want, as he puts it, to “throw an LLM [large language model] at the problem and see if it sticks.”

To combat this gold-rush mentality, the researchers propose a three-stage process, similar to the staged development of autonomous vehicles, for effectively and responsibly integrating AI into psychotherapy. In the first, assistive stage, AI performs simple, concrete tasks to support the therapist’s work. Next, in the collaborative stage, AI takes the lead in suggesting options for therapy, but humans tailor them and make the final decisions. Finally, in the fully autonomous stage, an AI manages the whole clinical interaction with patients and also handles tasks like billing and appointment scheduling.

For Eichstaedt, it is essential that engineers and therapists don’t move from the first stage to the second until all of the problems have been unearthed and solved; the same holds for moving from the second stage to the third. This is an admittedly slow process, “more on the scale of decades than years,” he says.
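
One way to make that gating requirement concrete is a promotion check that refuses to advance a system while any known safety issue at its current stage remains open. The stage names below follow the paper; the gating mechanics are an illustrative assumption.

```python
# A gate for the three-stage framework: a system may not advance while
# any known safety issue at its current stage is unresolved. Stage names
# follow the paper; the mechanics are an illustrative assumption.

from enum import Enum

class Stage(Enum):
    ASSISTIVE = 1         # AI handles simple, concrete support tasks
    COLLABORATIVE = 2     # AI suggests options; humans tailor and decide
    FULLY_AUTONOMOUS = 3  # AI manages the clinical interaction end to end

def may_advance(current: Stage, open_safety_issues: list[str]) -> bool:
    """Allow promotion only when every known problem at this stage is solved."""
    if current is Stage.FULLY_AUTONOMOUS:
        return False  # there is no stage beyond full autonomy
    return not open_safety_issues
```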

The researchers also highlight the importance of transparency: Patients must know that they are talking to a bot, and they must be able to opt out if they would like to. The approval of these systems should follow something like the FDA drug approval process, with everything evaluated to ensure safety and efficacy.

The paper, which emerged from an ongoing effort within the World Well-Being Project — a multi-university consortium of computer scientists and psychologists — serves in some ways as an alarm to the broader community of psychologists. Eichstaedt notes that the attention he and his collaborators pay to the technological change underway is not necessarily representative of the field as a whole.

“We understand that this is coming, but this is not at all clear to many psychologists,” he says. “We need the clinical community to wake up and embrace responsibility for these technologies. It would be easy to dismiss how good they are, how quickly they bake themselves into pillars of society, until it’s too late.”

Paper authors include Shannon Wiltsey Stirman, Stanford associate professor of psychiatry and behavioral sciences; Robb Willer, Stanford professor of sociology; professors Lyle Ungar and Robert DeRubeis from the University of Pennsylvania; associate professor H. Andrew Schwartz from Stony Brook University; assistant professor David Yaden from Johns Hopkins University; and assistant professor João Sedoc from New York University.

Stanford HAI’s mission is to advance AI research, education, policy and practice to improve the human condition.

Contributor: Dylan Walsh
