News

A Psychiatrist’s Perspective on Social Media Algorithms and Mental Health

Date: September 14, 2021
Topics: Healthcare

Considering social media’s growing impact, how can we create empathetic design frameworks that foster compassion online?

As of 2021, there are over 3.78 billion social media users worldwide, with each person averaging 145 minutes of social media use per day. And in those hours spent online, we’re beginning to see the harmful impact on mental health: loneliness, anxiety, fear of missing out, social comparison, and depression. 

Social media has undoubtedly integrated itself into society, but the question remains how best to negotiate our relationship with it. Nina Vasan, clinical assistant professor of psychiatry at Stanford and founder and executive director of Brainstorm: The Stanford Lab for Mental Health Innovation, and Sara Johansen, resident psychiatrist at Stanford and director of clinical innovation at Stanford Brainstorm, explored possible answers during a Stanford Institute for Human-Centered AI seminar. They outlined the impact of social media on mental health and the psychological underpinnings of social media addiction, as well as opportunities to mitigate risk and promote wellbeing. Dr. Vasan and Dr. Johansen have worked with platforms such as Pinterest and TikTok to design and implement more empathic user experiences.

What makes social media so addictive?

Rewarding users with variable stimuli (likes, notifications, comments, etc.) keeps them engaged with content. When a user’s photo receives a “like,” it activates the same dopamine pathways involved in motivation, reward, and addiction. What keeps us hooked on social media isn’t just the “pleasure rush of the like,” says Johansen, “it’s the intermittent absence of the like that keeps us engaged.”
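
This is the classic variable-ratio reinforcement schedule from behavioral psychology. As a purely illustrative sketch (the probability below is a made-up number, not a measured platform rate), each “check” of the app has some chance of delivering fresh feedback, so the next reward always feels one check away:

```python
import random

# Hypothetical value, not a measured rate: chance that any given
# check of the app finds new feedback (a like, comment, notification).
REWARD_PROBABILITY = 0.3

def check_app() -> bool:
    """One 'pull of the lever': True if new feedback is waiting."""
    return random.random() < REWARD_PROBABILITY

def session(num_checks: int = 20) -> None:
    empty_streak = 0
    for i in range(num_checks):
        if check_app():
            # Reward arrives unpredictably, after a variable number
            # of empty checks -- the schedule that reinforcement
            # research associates with the most persistent behavior.
            print(f"check {i:2d}: new like! (after {empty_streak} empty checks)")
            empty_streak = 0
        else:
            empty_streak += 1

session()
```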

When does it become harmful?

Trapping users in endless scrolling loops can also lead to social comparison. When presented with other people’s curated feeds, we are vulnerable to “frequent and extreme upward social comparison,” which can produce a number of negative side effects, such as erosion of self-esteem, depressed mood, and decreased life satisfaction. Some people try to cope with eroded self-esteem by attacking other people’s sense of self, which can lead to cyber-bullying.

Additionally, with advances in AI-driven face tracking, facial recognition, and facial augmentation, image-based apps have created questionable filters, including ones designed to make a user appear more slender, which could contribute to distortions in body image. These platforms also offer “easy access to a community of people who promote and encourage disordered eating behavior,” says Vasan.

What are we doing now?

To moderate the vitriol of cyber-bullying, many companies have turned to AI to classify comments with negative sentiment and either filter them out or prompt commenters to pause and reconsider their actions.
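
The platforms’ production systems are proprietary, but the general pattern is a classifier plus a confidence threshold plus a nudge. Here is a minimal, hypothetical sketch using an off-the-shelf sentiment model from the open-source Hugging Face transformers library as a stand-in for a platform’s own classifier; the threshold and messages are illustrative assumptions.

```python
from transformers import pipeline  # pip install transformers

# Off-the-shelf sentiment model standing in for a platform's own
# (proprietary) toxicity or negativity classifier.
classifier = pipeline("sentiment-analysis")

NUDGE_THRESHOLD = 0.90  # hypothetical confidence cutoff

def moderate(comment: str) -> str:
    result = classifier(comment)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.98}
    if result["label"] == "NEGATIVE" and result["score"] > NUDGE_THRESHOLD:
        # Rather than silently filtering, prompt the commenter to
        # pause and reconsider before posting.
        return "Are you sure you want to post this? Take a moment to reconsider."
    return "Comment posted."

print(moderate("You are pathetic and everyone hates you."))
print(moderate("Congrats on the new job!"))
```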

Social media platforms are now working to ban communities that post harmful content. Many apps, such as TikTok and Pinterest, will present hotline information and support resources in response to search queries related to self-harm, suicide, depression, and eating disorders. Moderation remains a complicated task, notes Vasan, as users find new ways to evade search filters.
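
A hypothetical sketch of how such a search-time intercept might look: match the query against a list of sensitive terms and, on a hit, surface support resources above the results. The term list, helper names, and banner text here are illustrative assumptions; the 988 Suicide & Crisis Lifeline is the real U.S. number.

```python
# Illustrative list; real platforms maintain far larger, evolving lists
# precisely because users find ways around exact keywords.
SENSITIVE_TERMS = {"self-harm", "suicide", "depression", "eating disorder"}

SUPPORT_BANNER = (
    "You are not alone. Free, confidential support is available. "
    "In the U.S., call or text 988 (Suicide & Crisis Lifeline)."
)

def search(query: str) -> list[str]:
    normalized = query.lower()
    results = []
    if any(term in normalized for term in SENSITIVE_TERMS):
        # Surface support resources before any regular results.
        results.append(SUPPORT_BANNER)
    results.append(f"...regular search results for {query!r}...")
    return results

for line in search("depression quotes"):
    print(line)
```

The brittleness of exact-term matching is precisely the evasion problem Vasan notes: misspellings and coded hashtags slip past static lists, which is why moderation remains a moving target.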

The psychiatrists don’t conclude that people must abstain completely from online platforms. For many of us, social media can be a rewarding experience that connects us with people all around the world. Instead of approaching screen time through the “displacement hypothesis,” which suggests the negative impact of technology is directly related to exposure, they recommend the “Goldilocks” hypothesis, which identifies moderate use as optimal for wellbeing.
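
The two hypotheses imply different dose-response curves: displacement predicts that wellbeing falls roughly monotonically with exposure, while the Goldilocks hypothesis predicts an inverted U that peaks at moderate use. A toy numerical illustration with made-up coefficients, not fitted to any real data:

```python
import numpy as np

hours = np.linspace(0, 8, 81)  # daily social media use, in hours

# Displacement view: harm scales directly with exposure (toy slope).
displacement = 70 - 5 * hours

# Goldilocks view: inverted-U, peaking at moderate use
# (coefficients are illustrative only).
goldilocks = 70 + 6 * hours - 2 * hours**2

peak = hours[np.argmax(goldilocks)]
print(f"In this toy model, wellbeing peaks at {peak:.1f} hours/day")
```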

On social media platforms, most risk mitigation methods focus on non-maleficence, the principle of doing no harm. Vasan and Johansen suggest that we should also consider beneficence, doing good. For example, Brainstorm’s recent work with Pinterest led to Pinterest Compassionate Search, which offers free therapeutic exercises on the platform in response to depression-related search terms.

What’s next?

Both psychiatrists emphasized the need for more social media-specific research, with greater granularity at the level of individual apps rather than smartphone use as a whole.

They also recommend that app makers consider more than the most simplistic business incentives. As we shift from “minimizing harm to promoting wellbeing,” Johansen says, it is important to realize that the friction associated with making apps less addictive “is going to come at a loss of some growth.” In the end, it comes down to choosing that option because “it’s the ethical thing to do, because we have a responsibility to help these young minds develop in a healthy way.”

Drs. Vasan and Johansen consult for TikTok. Dr. Vasan has also consulted for Pinterest and Instagram.

Contributor: Tammy Qiu