
A Psychiatrist’s Perspective on Social Media Algorithms and Mental Health

Date
September 14, 2021
Topics
Healthcare

Considering social media’s growing impact, how can we create empathetic design frameworks to improve compassion online?

As of 2021, there are over 3.78 billion social media users worldwide, with each person averaging 145 minutes of social media use per day. And in those hours spent online, we’re beginning to see the harmful impact on mental health: loneliness, anxiety, fear of missing out, social comparison, and depression. 

Social media has undoubtedly integrated itself into society, but the question of how to properly negotiate our relationship with it remains open. Nina Vasan, clinical assistant professor of psychiatry at Stanford and founder and executive director of Brainstorm: The Stanford Lab for Mental Health Innovation, and Sara Johansen, resident psychiatrist at Stanford and director of clinical innovation at Stanford Brainstorm, explored possible answers during a Stanford Institute for Human-Centered AI seminar. They outlined the impact of social media on mental health and the psychological underpinnings of social media addiction, as well as opportunities to mitigate risk and promote wellbeing. Dr. Vasan and Dr. Johansen have worked with platforms such as Pinterest and TikTok to design and implement more empathic user experiences.

What makes social media so addictive?

Social media platforms keep users engaged by rewarding them variably with stimuli: likes, notifications, comments, and so on. When a user’s photo receives a “like,” the same dopamine pathways involved in motivation, reward, and addiction are activated. What keeps us hooked on social media isn’t just the “pleasure rush of the like,” says Johansen, “it’s the intermittent absence of the like that keeps us engaged.”
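The intermittent pattern Johansen describes is what behavioral psychology calls a variable-ratio reward schedule. As a rough illustration (not any platform's actual logic), it can be sketched as a sequence of app checks where a reward arrives unpredictably with fixed probability:

```python
import random

def variable_ratio_schedule(n_checks, reward_prob, seed=None):
    """Simulate an intermittent (variable-ratio) reward schedule.

    Each time the user checks the app, a 'like' or notification arrives
    with probability reward_prob -- unpredictably, which is the pattern
    associated with the strongest habit formation.
    """
    rng = random.Random(seed)
    return [rng.random() < reward_prob for _ in range(n_checks)]

# Example: 20 app checks, each with a ~30% chance of a notification.
rewards = variable_ratio_schedule(20, 0.3, seed=42)
```

Because the user cannot predict which check will pay off, every check feels potentially rewarding, which encourages frequent re-checking.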

When does it become harmful?

One consequence of trapping users in endless scrolling loops is social comparison. When presented with other people’s curated feeds, we are vulnerable to “frequent and extreme upward social comparison,” which can lead to negative side effects such as eroded self-esteem, depressed mood, and decreased life satisfaction. Some people try to cope with eroded self-esteem by attacking other people’s sense of self, which can lead to cyberbullying.

Additionally, with advances in face tracking, facial recognition, and facial augmentation using AI, image-based apps have created questionable filters including ones designed to make a user appear more slender, which could contribute to distortions in body image. These platforms also offer “easy access to a community of people who promote and encourage disordered eating behavior,” says Vasan.

What are we doing now?

To moderate the vitriol of cyber-bullying, many companies have turned to AI as a method for classifying comments with negative sentiment and filtering them or prompting commenters to pause and reconsider their actions. 
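Production systems use trained machine-learning classifiers, but the filter-or-prompt logic described above can be illustrated with a deliberately toy keyword-based negativity score (the term list, threshold, and function names here are hypothetical):

```python
# Toy stand-in for a trained sentiment classifier.
HOSTILE_TERMS = {"hate", "ugly", "loser", "stupid"}

def negativity_score(comment):
    """Fraction of the comment's words that match hostile terms."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return len(words & HOSTILE_TERMS) / max(len(words), 1)

def moderate(comment, threshold=0.1):
    """Return 'prompt' to ask the commenter to pause and reconsider
    before posting, or 'allow' to let the comment through."""
    return "prompt" if negativity_score(comment) >= threshold else "allow"
```

A real classifier would score sentiment from context rather than a word list, but the downstream decision, filter, prompt, or allow, has the same shape.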

Social media platforms are now working to ban communities that post harmful content. Many apps such as TikTok and Pinterest will present information on hotlines and support resources as a response to search queries for self-harm, suicide, depression, and eating disorder-related content. Moderation is still a complicated task as users find new ways to evade search filters, notes Vasan.
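The search-intercept behavior can be sketched as a lookup from sensitive topics to support resources; this is an assumed simplification (real platforms maintain vetted, region-specific resource lists and far more robust matching):

```python
# Hypothetical mapping from sensitive topics to support resources.
SUPPORT_RESOURCES = {
    "suicide": "Crisis hotline and counseling resources",
    "self-harm": "Crisis hotline and counseling resources",
    "eating disorder": "Eating-disorder support resources",
}

def intercept_search(query):
    """Return support resources when a query touches a sensitive topic,
    otherwise None (fall through to normal search results)."""
    q = query.lower()
    for topic, resource in SUPPORT_RESOURCES.items():
        if topic in q:
            return resource
    return None
```

Naive substring matching like this is easy to evade with deliberate misspellings or coded terms, which is exactly why Vasan notes that moderation remains a complicated, ongoing task.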

The psychiatrists don’t conclude that people must abstain completely from online platforms. For many of us, social media can be a rewarding experience that connects us with people all around the world. Instead of approaching screen time through the “displacement hypothesis,” which suggests the negative impact of technology is directly related to exposure, they recommend the “Goldilocks” hypothesis, which identifies moderate use as optimal for wellbeing.

On social media platforms, most risk mitigation methods are focused on non-maleficence, based on the principle to do no harm. Vasan and Johansen suggest that we should also consider beneficence, which is to do good. For example, Brainstorm’s recent work with Pinterest led to Pinterest Compassionate Search, which offers free therapeutic exercises on the platform in response to depression-related search terms.

What’s next?

Both psychiatrists emphasized a need for more social media-specific research, with even more granularity with respect to individual apps and not just smartphone use as a whole. 

They also recommend app makers consider more than the most simplistic business incentives. As we shift from “minimizing harm to promoting wellbeing,” Johansen says, it is important to realize that the friction associated with making apps less addictive “is going to come at a loss of some growth.” In the end it comes down to choosing that option because “it’s the ethical thing to do, because we have a responsibility to help these young minds develop in a healthy way.”

Drs. Vasan and Johansen consult for TikTok. Dr. Vasan has also consulted for Pinterest and Instagram.

Contributor(s)
Tammy Qiu