AI-Enabled Depression Prediction Using Social Media | Stanford HAI

Policy Brief


Date: February 01, 2021
Topics: Healthcare
Read Paper
Abstract

This brief introduces AI-enabled depression prediction through social media and calls for clear policy guidelines to ensure patient privacy.

Key Takeaways

  • AI-enabled depression prediction is capable of matching the accuracy of traditional screening surveys but can be delivered to whole (consenting) populations.

  • By examining social media language, our model can make a significant impact in recognizing one of the world's most widespread mental illnesses.

  • Policymakers and regulators must establish clearer guidelines about access to data, understand the consequences of using algorithms to change social media posts into protected health information, and consider how depression detection can be combined with digital treatments in a modern system of care.

Executive Summary

Natural language processing for mental health monitoring is an emerging use of AI that is poised to disrupt the landscape of the health care industry. As the profusion of social media platforms allows for a wider swathe of the population to share their thoughts and feelings with the world, users’ posts and reactions extend the scope of medical screening methods for psychological disorders such as depression. Users are already being marketed to with sophistication based on these behaviors — why not leverage these technologies for public health?

To give some sense of scale for the unaddressed need, in the United States, between 7 and 26 percent of the population experiences depression each year, but only between 13 and 49 percent of those people receive treatment — this means that in the US, there currently may be 30+ million people in need of but not receiving mental healthcare. (Of note, these numbers are pre-COVID; early studies suggest that the prevalence of mental health conditions may have doubled after the first lockdowns). These high rates of underdiagnosis and undertreatment suggest that new screening methods like AI-enabled prediction are needed to identify and treat patients with depression.
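The "30+ million" figure above can be sanity-checked with a back-of-envelope calculation. The sketch below uses the prevalence (7–26 percent) and treatment (13–49 percent) ranges cited in the brief; the US population figure of roughly 330 million is an assumption, not a number from the source.

```python
# Back-of-envelope estimate of untreated depression in the US,
# from the prevalence and treatment ranges cited in the brief.
# ASSUMPTION: US population of roughly 330 million.

US_POPULATION = 330_000_000

def untreated(prevalence: float, treatment_rate: float) -> float:
    """People with depression who are not receiving treatment."""
    return US_POPULATION * prevalence * (1 - treatment_rate)

low = untreated(0.07, 0.49)   # optimistic: low prevalence, high treatment
high = untreated(0.26, 0.13)  # pessimistic: high prevalence, low treatment

print(f"{low / 1e6:.0f}M to {high / 1e6:.0f}M untreated")
```

Even the optimistic end of the range leaves over ten million people untreated, and the midpoint of the range is consistent with the 30+ million figure in the text.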

In a recent article I co-authored in the Proceedings of the National Academy of Sciences, “Facebook language predicts depression in medical records,” my team and I specify a set of protocols to identify patients suffering from depression using only language from their Facebook posts. These methods capitalize on significant advances in technology over the last decade and are capable of roughly matching the accuracy of traditional screening surveys. Using a system that relies on machine learning to cluster, count, and score each word, we find that language predictors of depression include emotional, interpersonal, and cognitive processes represented by words such as sadness, a preoccupation with the self or rumination, and expressions of loneliness and hostility.
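The "cluster, count, and score" approach described above can be illustrated with a minimal dictionary-based scoring sketch. The word lists and category names below are illustrative placeholders invented for this example; they are not the actual categories, vocabularies, or model coefficients from the PNAS paper.

```python
# Minimal sketch of dictionary-based language scoring, loosely in the
# spirit of the method described above. CATEGORIES is a hypothetical
# stand-in for empirically derived word clusters.
import re
from collections import Counter

CATEGORIES = {
    "sadness":    {"sad", "cry", "tears", "miserable"},
    "self_focus": {"i", "me", "my", "myself"},
    "loneliness": {"alone", "lonely", "nobody"},
}

def category_scores(post: str) -> dict:
    """Fraction of a post's words that fall in each category."""
    words = re.findall(r"[a-z']+", post.lower())
    counts = Counter(words)
    total = max(len(words), 1)  # avoid division by zero on empty posts
    return {
        name: sum(counts[w] for w in vocab) / total
        for name, vocab in CATEGORIES.items()
    }

scores = category_scores("I feel so alone, nobody sees me cry")
```

In a real system, per-category frequencies like these would be aggregated over many posts per user and fed into a trained classifier, rather than interpreted directly.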

Depression assessment through social media represents a way to screen that does not require users to actively engage in a survey; it is unobtrusive for individuals who consent to participate. However, for this method to become feasible as a scalable complement to existing screening and monitoring procedures, policymakers and regulators will need to ensure that patient privacy and confidentiality remain at the forefront when these technology solutions are developed. To that end, clearer guidelines and regulation are needed about who may access the data and the purposes for which it is collected. The application of machine learning to quasi-public social media posts can transform such data into protected health information, and it must be understood and treated as such, including with regard to questions of privacy and of the medical autonomy of patients.

If this advance is responsibly developed and introduced in a manner that integrates with existing systems of care and relationships with trusted providers, it has the potential to be a huge shift in public health.

Authors
  • Johannes Eichstaedt

Related Publications

Toward Responsible AI in Health Insurance Decision-Making
Michelle Mello, Artem Trotsyuk, Abdoul Jalil Djiberou Mahamadou, Danton Char
Policy Brief | Healthcare; Regulation, Policy, Governance | Quick Read | Feb 10, 2026

This brief proposes governance mechanisms for the growing use of AI in health insurance utilization review.

Response to FDA's Request for Comment on AI-Enabled Medical Devices
Desmond C. Ong, Jared Moore, Nicole Martinez-Martin, Caroline Meinhardt, Eric Lin, William Agnew
Response to Request | Healthcare; Regulation, Policy, Governance | Quick Read | Dec 02, 2025

Stanford scholars respond to a federal RFC on evaluating AI-enabled medical devices, recommending policy interventions to help mitigate the harms of AI-powered chatbots used as therapists.

Russ Altman’s Testimony Before the U.S. Senate Committee on Health, Education, Labor, and Pensions
Russ Altman
Testimony | Healthcare; Regulation, Policy, Governance; Sciences (Social, Health, Biological, Physical) | Quick Read | Oct 09, 2025

In this testimony presented to the U.S. Senate Committee on Health, Education, Labor, and Pensions hearing titled “AI’s Potential to Support Patients, Workers, Children, and Families,” Russ Altman highlights opportunities for congressional support to make AI applications for patient care and drug discovery stronger, safer, and human-centered.

Michelle M. Mello's Testimony Before the U.S. House Committee on Energy and Commerce Health Subcommittee
Michelle Mello
Testimony | Healthcare; Regulation, Policy, Governance | Quick Read | Sep 02, 2025

In this testimony presented to the U.S. House Committee on Energy and Commerce’s Subcommittee on Health hearing titled “Examining Opportunities to Advance American Health Care through the Use of Artificial Intelligence Technologies,” Michelle M. Mello calls for policy changes that will promote effective integration of AI tools into healthcare by strengthening trust.