
Can Artificial Intelligence Map Our Moods?

Date: January 25, 2021
Topics: Healthcare, Natural Language Processing, Machine Learning

A Stanford researcher uses machine learning to identify mood swings through social media.

Researchers showed long ago that artificial intelligence models could identify a person’s basic psychological traits from their digital footprints in social media.

That may be just a start. A new study, co-authored by Stanford’s Johannes Eichstaedt and Aaron Weidman (University of Michigan), provides strong evidence that machine-learning models can also map a person’s mood swings and volatility from week to week.

Using natural language processing tools to analyze Facebook posts, the new machine-learning model infers both how happy or sad a person is feeling at any given time and how aroused or lackadaisical. Over time, the algorithm can even produce a video of a person’s emotional ups and downs.
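
To make the idea concrete, here is a toy sketch, not from the study, of what such a weekly mood map might look like: made-up valence and arousal estimates for one person, plotted as a trajectory through the two-dimensional space the model works in. The data, the random walk, and the plotting choices are all invented for illustration.

```python
# Toy illustration only: synthetic weekly valence/arousal estimates plotted as
# a trajectory through the two-dimensional mood space described above.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
weeks = np.arange(1, 29)                     # 28 weeks, as in the study
valence = np.cumsum(rng.normal(0, 0.3, 28))  # sad (-) ... happy (+)
arousal = np.cumsum(rng.normal(0, 0.3, 28))  # lackadaisical (-) ... aroused (+)

fig, ax = plt.subplots(figsize=(5, 5))
ax.plot(valence, arousal, "-o", alpha=0.6)
for w, x, y in zip(weeks, valence, arousal):
    if w % 7 == 1:                           # label a few weeks for readability
        ax.annotate(f"week {w}", (x, y))
ax.axhline(0, color="gray", lw=0.5)
ax.axvline(0, color="gray", lw=0.5)
ax.set_xlabel("valence (negative to positive)")
ax.set_ylabel("arousal (low to high)")
ax.set_title("One person's weekly mood trajectory (synthetic data)")
plt.show()
```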

The findings could spark new worries about privacy or the use of social media to market to people. In theory, marketers or political propagandists could someday tailor their messages based on which ones elicit the strongest emotional reactions.

But Eichstaedt, an assistant professor of psychology in Stanford’s School of Humanities and Sciences and a faculty fellow at the Stanford Institute for Human-Centered Artificial Intelligence, says the approach could help clinicians diagnose mood disorders and track how well patients respond to medication, therapy, or a change in lifestyle.

“If this kind of approach is used ethically and legally, with strict privacy protection, we could someday have ways to computationally understand the mind,” Eichstaedt says. “It could help with diagnosis and pharmaceutical evaluation. It could also help us track the psychological impact of traumatic societal events, such as the COVID pandemic.”

For the moment, both the good and bad possibilities are still well in the future. For one thing, the results are preliminary, based on a small number of mostly American Facebook super-users who posted much more often than most people. As a result, the researchers caution, the results may not be representative of all Americans. They may be even less representative of people from other cultures.

That said, the researchers noted, the machine-learning program offered tantalizing evidence that it was on the right track. Many of the mood patterns it found were consistent with previous studies in which other researchers had people self-report their own feelings.

Training Machines To Track Feelings

Eichstaedt and Weidman began by having human research assistants annotate public Facebook postings of nearly 3,000 volunteers from an earlier study. The research assistants rated each post on its “valence,” or how much it expressed positive or negative emotion, and on its “arousal,” or the intensity of those feelings.

Once those ratings were complete, the posts were used to train a machine-learning model to predict which kinds of language conveyed which kinds of feelings. Eichstaedt and Weidman then tested their model on an entirely different set of posts from 640 heavy Facebook users, who posted an average of 17 times a week over 28 weeks. The result is roughly 18,000 person-weeks of emotional dynamics, the largest dataset of its kind ever compiled, which the researchers have made publicly available for the research community to mine.
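
The paper’s exact model isn’t reproduced here, but the general recipe can be sketched in a few lines. The sketch below, with hypothetical posts and ratings and an off-the-shelf TF-IDF plus ridge-regression pipeline standing in for whatever the authors actually used, learns a mapping from post text to human valence and arousal ratings, scores new posts, and averages the predictions within each person-week.

```python
# A minimal sketch of the general recipe, not the authors' actual pipeline:
# learn a mapping from post text to human valence/arousal ratings, then
# score held-out posts and average the predictions within each person-week.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical annotated training data: one row per human-rated post.
train = pd.DataFrame({
    "text":    ["best day ever with friends!", "so tired of everything", "meh."],
    "valence": [0.9, -0.7, -0.1],   # human rating: negative ... positive
    "arousal": [0.8,  0.4, -0.6],   # human rating: calm ... intense
})

valence_model = make_pipeline(TfidfVectorizer(), Ridge())
arousal_model = make_pipeline(TfidfVectorizer(), Ridge())
valence_model.fit(train["text"], train["valence"])
arousal_model.fit(train["text"], train["arousal"])

# Hypothetical held-out posts from one heavy user, tagged with the week posted.
posts = pd.DataFrame({
    "week": [1, 1, 2, 2],
    "text": ["celebrating tonight!", "great news at work",
             "stuck in traffic again", "everything is falling apart"],
})
posts["valence"] = valence_model.predict(posts["text"])
posts["arousal"] = arousal_model.predict(posts["text"])

# One valence/arousal estimate per person-week.
weekly = posts.groupby("week")[["valence", "arousal"]].mean()
print(weekly)
```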

Evaluating the Model

To get some sense of whether the machine-learning model was reading people right, Eichstaedt and Weidman looked at how well the patterns it revealed matched up with the predictions based on classical in-person psychological studies.

The results lined up with predictions based on what psychology researchers call the “Big Five” personality traits: openness, agreeableness, extroversion, conscientiousness, and neuroticism. All the Facebook users in the study had volunteered to participate in a “My Personality” study, which measured the Big Five traits through a questionnaire. Consistent with those predictions, people who scored higher on extroversion, agreeableness, and conscientiousness in the questionnaire tended, according to the model’s estimates, to feel both more upbeat and more aroused.
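
As a rough illustration of that kind of consistency check (the numbers below are simulated, not the study’s data), one can correlate questionnaire-based trait scores with each person’s average model-inferred valence and arousal:

```python
# Simulated consistency check: correlate questionnaire-based extroversion
# scores with each person's average model-inferred valence and arousal.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 640                                   # heavy users in the test set

extroversion = rng.normal(size=n)         # hypothetical questionnaire scores
mean_valence = 0.3 * extroversion + rng.normal(scale=1.0, size=n)
mean_arousal = 0.2 * extroversion + rng.normal(scale=1.0, size=n)

for name, mood in [("valence", mean_valence), ("arousal", mean_arousal)]:
    r, p = pearsonr(extroversion, mood)
    print(f"extroversion vs. mean {name}: r = {r:.2f}, p = {p:.3g}")
```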

The machine-learning results also dovetailed neatly with earlier studies of the relationship between how good people feel and how aroused they are at any given moment. Just as those studies had theorized, the results showed a lopsided “V-shaped” relationship: arousal rises both as people feel more positive and as they feel more negative, but the relationship is stronger on the positive side; it’s hard to feel something very positive without also feeling energized.
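
A small simulation makes the lopsided “V” easy to see. With synthetic numbers chosen to mimic the described pattern (the slopes below are invented, not estimates from the paper), fitting each branch of the valence-arousal relationship separately recovers a steeper rise in arousal on the positive side:

```python
# Synthetic illustration of the lopsided V shape: arousal rises with valence
# in both directions, but more steeply on the positive side.
import numpy as np

rng = np.random.default_rng(2)
valence = rng.uniform(-1, 1, 5000)
# Build arousal with a steeper positive branch (slopes are made up).
arousal = np.where(valence >= 0, 0.8 * valence, -0.4 * valence)
arousal += rng.normal(scale=0.1, size=valence.size)

# Recover the two slopes by least squares on each branch separately.
pos, neg = valence >= 0, valence < 0
slope_pos = np.polyfit(valence[pos], arousal[pos], 1)[0]
slope_neg = np.polyfit(valence[neg], arousal[neg], 1)[0]
print(f"slope on the positive-valence branch: {slope_pos:+.2f}")
print(f"slope on the negative-valence branch: {slope_neg:+.2f}  "
      "(negative slope: arousal rises as valence falls)")
```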

Gender Discrepancies

The researchers also found that men and women showed somewhat different emotional patterns.

The women tended to be somewhat more upbeat than men and to have a wider emotional “resting point,” or typical range of pleasant and aroused feelings. Put another way, says Eichstaedt, men tend to be grumpier and less emotionally responsive to their environment than women, which is consistent with the idea that women have greater “emotional flexibility.”

Eichstaedt cautions that it’s too early to know whether machine learning could eventually provide the equivalent of an accurate MRI image for mood. But given all the data available on social media, he says, it could well open new opportunities for understanding human emotional dynamics at much larger scale.


Contributor(s)
Edmund L. Andrews