© Stanford University.  Stanford, California 94305.
What is Bias (in AI)?

Bias in AI occurs when a system produces results that favor or discriminate against certain groups of people. This typically happens because the training data reflects historical prejudices or doesn't represent all groups equally — for example, a hiring AI trained on past decisions might discriminate against women if the company historically hired mostly men. AI systems can also be biased due to how they're designed, what features they prioritize, or how success is measured, making it crucial to carefully examine both the data and goals when building these systems.
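The hiring example above can be sketched with synthetic data. This is a hypothetical illustration (the gender labels, hire rates, and helper names are all invented for the sketch, not drawn from any real study): when historical data records different hire rates for equally qualified groups, any model that mimics those rates inherits the disparity.

```python
import random

random.seed(0)

def make_history(n=1000):
    """Generate synthetic 'historical hiring' records as (gender, qualified, hired).

    The bias is baked in deliberately: qualified men are hired 90% of the
    time, equally qualified women only 50% of the time.
    """
    data = []
    for _ in range(n):
        gender = random.choice(["M", "F"])
        qualified = random.random() < 0.5
        if qualified:
            hired = random.random() < (0.9 if gender == "M" else 0.5)
        else:
            hired = False
        data.append((gender, qualified, hired))
    return data

def hire_rate(data, gender):
    """Observed hire rate among *qualified* candidates of one gender.

    A model trained to reproduce historical decisions will tend to
    reproduce exactly these per-group rates.
    """
    outcomes = [hired for g, qualified, hired in data if g == gender and qualified]
    return sum(outcomes) / len(outcomes)

history = make_history()
print(f"Qualified men hired:   {hire_rate(history, 'M'):.0%}")
print(f"Qualified women hired: {hire_rate(history, 'F'):.0%}")
# The gap between these two rates is the disparity a model trained
# on this history would learn to reproduce.
```

The gap between the two printed rates is one common way to quantify this kind of bias (a demographic-parity difference among equally qualified candidates); auditing for it requires exactly the careful examination of data and goals the definition calls for.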

Bias (in AI) mentioned at Stanford HAI

Explore Similar Terms:

Ethical AI | AI Alignment | Responsible AI

See Full List of Terms & Definitions

How Harmful Are AI’s Biases on Diverse Student Populations?
Prabha Kannan
Oct 03
news

Large language models exhibit alarming magnitudes of bias when generating stories about learners, often reinforcing harmful stereotypes 

Education, Skills
Ethics, Equity, Inclusion
Assessing Political Bias in Language Models
Andrew Myers
May 22
news

Researchers develop a new tool to measure how well popular large language models align with public opinion to evaluate bias in chatbots.

Natural Language Processing
Machine Learning
How Language Bias Persists in Scientific Publishing Despite AI Tools
Scott Hadly
Jun 16
news

Stanford researchers highlight the ongoing challenges of language discrimination in academic publishing, revealing that AI tools may not be the solution for non-native speakers.

Ethics, Equity, Inclusion
Generative AI
How Bias Hides in ‘Kitchen Sink’ Approaches to Data
Julian Nyarko
Andrew Myers
May 30
news

In risk modeling, AI researchers take a more-is-better approach to training data, but a new study argues that a less-is-more approach may be preferable.

Natural Language Processing
Machine Learning
Covert Racism in AI: How Language Models Are Reinforcing Outdated Stereotypes
Katharine Miller
Sep 03
news

Despite advancements in AI, new research reveals that large language models continue to perpetuate harmful racial biases, particularly against speakers of African American English. 

Can AI Hold Consistent Values? Stanford Researchers Probe LLM Consistency and Bias
Andrew Myers
Nov 11
news

New research tests large language models for consistency across diverse topics, revealing that while they handle neutral topics reliably, controversial issues lead to varied answers.

Ethics, Equity, Inclusion
Natural Language Processing
Privacy, Safety, Security
