What is Human-Computer Interaction (HCI)? | Stanford HAI


Human-Computer Interaction (HCI) is the field that studies how people operate and engage with computer systems. It examines how users interact with technology, encompassing interface design, user experience, and accessibility. The goal of HCI is to create systems that are intuitive, efficient, and enjoyable for people to use.



HCI mentioned at Stanford HAI

Explore Similar Terms:

Human-Centered AI (HAI) | Chatbot | Explainable AI (XAI)


How Culture Shapes What People Want from AI
Nikki Goth Itoi
Jul 29
news

Stanford researchers explore how to build culturally inclusive and equitable AI by offering initial empirical evidence on cultural variations in people’s ideal preferences about AI.

Design, Human-Computer Interaction
James Landay: Paving a Path for Human-Centered Computing
James Landay
Katharine Miller
Aug 12
news

The Stanford HAI co-director has blazed a trail by keeping humans at the center of emerging technologies.

Design, Human-Computer Interaction
Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising
Michelle Lam, Ayush Pandit, Colin H. Kalicki, Rachit Gupta, Poonam Sahoo, Danaë Metaxa
Oct 04
Research

Algorithm audits are powerful tools for studying black-box systems without direct knowledge of their inner workings. While very effective in examining technical components, the method stops short of a sociotechnical frame, which would also consider users themselves as an integral and dynamic part of the system. Addressing this limitation, we propose the concept of sociotechnical auditing: auditing methods that evaluate algorithmic systems at the sociotechnical level, focusing on the interplay between algorithms and users as each impacts the other. Just as algorithm audits probe an algorithm with varied inputs and observe outputs, a sociotechnical audit (STA) additionally probes users, exposing them to different algorithmic behavior and measuring their resulting attitudes and behaviors. As an example of this method, we develop Intervenr, a platform for conducting browser-based, longitudinal sociotechnical audits with consenting, compensated participants. Intervenr investigates the algorithmic content users encounter online, and also coordinates systematic client-side interventions to understand how users change in response. As a case study, we deploy Intervenr in a two-week sociotechnical audit of online advertising (N = 244) to investigate the central premise that personalized ad targeting is more effective on users. In the first week, we observe and collect all browser ads delivered to users, and in the second, we deploy an ablation-style intervention that disrupts normal targeting by randomly pairing participants and swapping all their ads. We collect user-oriented metrics (self-reported ad interest and feeling of representation) and advertiser-oriented metrics (ad views, clicks, and recognition) throughout, along with a total of over 500,000 ads.
Our STA finds that targeted ads indeed perform better with users, but also that users begin to acclimate to different ads in only a week, casting doubt on the primacy of personalized ad targeting given the impact of repeated exposure. In comparison with other evaluation methods that only study technical components, or only experiment on users, sociotechnical audits evaluate sociotechnical systems through the interplay of their technical and human components.

Design, Human-Computer Interaction
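The ablation-style intervention described in the abstract, randomly pairing participants and swapping each pair's ad streams to disrupt personalized targeting, can be sketched in a few lines. This is an illustrative sketch only: `pair_and_swap` and its arguments are hypothetical names invented here, not the Intervenr API.

```python
import random

def pair_and_swap(participants, ads_by_user):
    """Sketch of an ablation-style intervention: shuffle participants,
    pair them off two at a time, and swap each pair's ad streams so
    every paired user sees ads targeted at someone else."""
    users = list(participants)
    random.shuffle(users)
    swapped = dict(ads_by_user)
    # Walk the shuffled list two at a time; each pair exchanges ads.
    # With an odd count, the last participant keeps their own ads.
    for a, b in zip(users[::2], users[1::2]):
        swapped[a], swapped[b] = ads_by_user[b], ads_by_user[a]
    return swapped
```

Note that the swap only permutes streams between users; the overall multiset of ads delivered is unchanged, which is what lets the study compare user responses to targeted versus non-targeted ads under otherwise identical conditions.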
Exploring the Complex Ethical Challenges of Data Annotation
Beth Jensen
Jul 10
news

A cross-disciplinary group of Stanford students examines the ethical challenges faced by data workers and the companies that employ them.

Design, Human-Computer Interaction
Workforce, Labor
Stanford HAI Announces Hoffman-Yee Grants Recipients for 2024
Nikki Goth Itoi
Aug 21
announcement

Six interdisciplinary research teams received a total of $3 million to pursue groundbreaking ideas in the field of AI.

Design, Human-Computer Interaction
Healthcare
Natural Language Processing
Machine Learning
Stories for the Future 2024
Isabelle Levent
Deep Dive
Mar 31
Research

We invited 11 sci-fi filmmakers and AI researchers to Stanford for Stories for the Future, a day-and-a-half experiment in fostering new narratives about AI. Researchers shared perspectives on AI and filmmakers reflected on the challenges of writing AI narratives. Together researcher-writer pairs transformed a research paper into a written scene. The challenge? Each scene had to include an AI manifestation, but could not be about the personhood of AI or AI as a threat. Read the results of this project.

Machine Learning
Generative AI
Arts, Humanities
Communications, Media
Design, Human-Computer Interaction
Sciences (Social, Health, Biological, Physical)