Artist’s Intent: AI Recognizes Emotions in Visual Art


Date: March 22, 2021
Topics: Arts, Humanities; Machine Learning
Image credit: Piqsels

A team of AI researchers has trained its algorithms to see the emotional intent behind great works of art, possibly leading to computers that see much deeper than current technologies.

Experts in artificial intelligence have gotten quite good at creating computers that can “see” the world around them, recognizing the objects, animals, and activities in view. These capabilities have become foundational technologies for autonomous cars, planes, and security systems.

But now a team of researchers is working to teach computers to recognize not just what objects are in an image, but how those images make people feel — i.e., algorithms with emotional intelligence.

“This ability will be key to making artificial intelligence not just more intelligent, but more human, so to speak,” says Panos Achlioptas, a doctoral candidate in computer science at Stanford University who worked with collaborators in France and Saudi Arabia.

To get to this goal, Achlioptas and his team collected a new dataset, called ArtEmis, which was recently published in an arXiv pre-print. The dataset is built on 81,000 WikiArt paintings and consists of 440,000 written responses from more than 6,500 people indicating how a painting makes them feel, along with explanations of why they chose a certain emotion. Using those responses, Achlioptas and the team, headed by Stanford engineering professor Leonidas Guibas, trained neural speakers (AI that responds in written words) that allow computers to generate emotional responses to visual art and justify those emotions in language.
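The article describes the annotations only in prose. As a rough illustration of the kind of record such a dataset pairs with each painting, here is a hypothetical sketch in Python; the class and field names are invented for clarity, not the dataset's actual schema:

```python
from dataclasses import dataclass

# Hypothetical shape of one ArtEmis-style annotation: a painting,
# the emotion an annotator reported, and their written justification.
@dataclass
class ArtEmisAnnotation:
    painting: str      # a WikiArt painting identifier
    emotion: str       # one of the dataset's emotion labels
    explanation: str   # the annotator's explanation for that emotion

record = ArtEmisAnnotation(
    painting="rembrandt_example",
    emotion="sadness",
    explanation="The muted colors and downcast faces feel mournful.",
)
print(record.emotion)  # -> sadness
```

Training a neural speaker on hundreds of thousands of such (image, emotion, explanation) triples is what lets the system produce both a label and a sentence of justification for an unseen painting.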

The researchers chose to use art specifically, as an artist’s goal is to elicit emotion in the viewer. ArtEmis works regardless of the subject matter, from still life to human portraits to abstraction.

The work is a new approach in computer vision, notes Guibas, a faculty member of the AI lab and the Stanford Institute for Human-Centered Artificial Intelligence. “Classical computer vision captioning work has been about literal content,” Guibas says. “There are three dogs in the image, or someone is drinking coffee from a cup. Instead, we needed descriptions that defined emotional content.”

Capturing Emotion

The algorithm categorizes the artist’s work into one of eight emotional categories — ranging from awe to amusement to fear to sadness — and then explains in written text what it is in the image that justifies the emotional read. (See examples below. All are paintings evaluated by the algorithm, but which were not used in the training exercises.)
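The final selection step described above, choosing one of the eight emotion categories, can be sketched in a few lines. This is a minimal illustration, not the paper's code: the category list follows the examples named in the article, and the scores are made up, where a real system would compute them from the image:

```python
# Illustrative list of eight emotion categories (the article names awe,
# amusement, fear, and sadness; the rest are assumed for this sketch).
EMOTIONS = ["amusement", "awe", "contentment", "excitement",
            "anger", "disgust", "fear", "sadness"]

def top_emotion(scores):
    """Return the emotion label with the highest score."""
    best = max(range(len(EMOTIONS)), key=lambda i: scores[i])
    return EMOTIONS[best]

# Made-up scores for a gloomy painting; the highest is "sadness".
print(top_emotion([0.05, 0.1, 0.05, 0.1, 0.05, 0.05, 0.2, 0.4]))  # -> sadness
```

In the actual system, a text generator conditioned on the image then produces the written justification for the chosen label.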

Examples of how the algorithm identifies emotions in paintings.

“The computer is doing this,” says Achlioptas. “We can show it a new image it has never seen, and it will tell us how a human might feel.”

Remarkably, the researchers say, the captions accurately reflect the abstract content of the image in ways that go well beyond the capabilities of existing computer vision algorithms derived from documentary photographic datasets such as COCO.

An example of how the algorithm determines complicated emotions in paintings.

What’s more, the algorithm does not simply capture the broad emotional experience of a complete image, but it can decipher differing emotions within a given painting. For instance, in the famous Rembrandt painting (above) of the beheading of John the Baptist, ArtEmis distinguishes not only the pain on John the Baptist’s severed head, but also the “contentment” on the face of Salome, the woman to whom the head is presented.

Achlioptas points out that, even while ArtEmis is sophisticated enough to gauge that an artist’s intent can differ within a single image, the tool also accounts for the subjectivity and variability of human responses.

“Not every person sees and feels the same thing seeing a work of art,” he adds. For instance, “I can feel happy upon seeing the Mona Lisa, but Professor Guibas might feel sad. ArtEmis can distinguish these differences.”

An Artist’s Instrument

In the near term, the researchers anticipate ArtEmis could become a tool for artists to evaluate their works during creation to ensure their work is having the desired impact.

“It could provide guidance and inspiration to ‘steer’ the artist’s work as desired,” Achlioptas says. A graphic artist working on a new logo, for example, might use ArtEmis to check that it has the intended emotional effect.

Down the road, after additional research and refinements, Achlioptas can foresee emotion-based algorithms helping bring emotional awareness to artificial intelligence applications such as chatbots and conversational AI agents.

“I see ArtEmis bringing insights from human psychology to artificial intelligence,” Achlioptas says. “I want to make AI more personal and to improve the human experience with it.”

Stanford HAI's mission is to advance AI research, education, policy and practice to improve the human condition.

Contributor: Andrew Myers