Bias in AI occurs when a system produces results that systematically favor or discriminate against certain groups of people. This typically happens because the training data reflects historical prejudices or fails to represent all groups equally; for example, a hiring AI trained on past decisions might discriminate against women if the company historically hired mostly men. Bias can also arise from how a system is designed, which features it prioritizes, and how success is measured, so it is crucial to examine both the data and the goals when building these systems.
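As a concrete illustration of how such bias can be measured, here is a minimal sketch in Python of one common fairness metric, the demographic parity gap, applied to a hypothetical hiring scenario like the one above. The function names and the example data are assumptions for illustration, not from any particular system:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Fraction of positive outcomes per group.

    decisions: list of (group, hired) pairs, where hired is True/False.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, hired in decisions:
        totals[group] += 1
        if hired:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Difference between the highest and lowest group selection rates.

    A gap of 0 means every group is selected at the same rate; larger
    values indicate the system favors some groups over others.
    """
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs on a historical hiring dataset.
decisions = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", False), ("women", True), ("women", False), ("women", False),
]
print(selection_rates(decisions))         # {'men': 0.75, 'women': 0.25}
print(demographic_parity_gap(decisions))  # 0.5
```

A large gap like the 0.5 here does not by itself prove discrimination, but it flags exactly the kind of disparity that warrants auditing the training data and the definition of success.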
Large language models exhibit alarming levels of bias when generating stories about learners, often reinforcing harmful stereotypes.

Researchers develop a new tool that evaluates bias in chatbots by measuring how well popular large language models align with public opinion.

Stanford researchers highlight the ongoing challenges of language discrimination in academic publishing, revealing that AI tools may not be the solution for non-native speakers.

In risk modeling, AI researchers take a more-is-better approach to training data, but a new study argues that a less-is-more approach may be preferable.

Despite advancements in AI, new research reveals that large language models continue to perpetuate harmful racial biases, particularly against speakers of African American English.

New research tests large language models for consistency across diverse topics, revealing that while they handle neutral topics reliably, controversial issues lead to varied answers.