

All Work Published on Ethics, Equity, Inclusion

Promoting Algorithmic Fairness in Clinical Risk Prediction
Stephen R. Pfohl, Agata Foryciarz, Nigam Shah
Quick Read · Sep 09, 2022
Policy Brief

This brief examines the debate on algorithmic fairness in clinical predictive algorithms and recommends paths to safer, more equitable healthcare AI.

Healthcare
Machine Learning
Ethics, Equity, Inclusion
How Congress Could Stifle The Onslaught Of AI-Generated Child Sexual Abuse Material
Tech Policy Press
Sep 25, 2025
Media Mention

HAI Policy Fellow Riana Pfefferkorn advises on ways in which the United States Congress could move the needle on model safety regarding AI-generated CSAM.


Ethics, Equity, Inclusion
Privacy, Safety, Security
Regulation, Policy, Governance
Stanford HAI Artificial Intelligence Bill of Rights
Michele Elam, Rob Reich
Jan 01, 2022
Response to Request

Stanford scholars respond to a federal RFI regarding public and private sector uses of biometric technologies, proposing six principles for the responsible use of biometrics and AI.

Regulation, Policy, Governance
Ethics, Equity, Inclusion
How Do We Protect Children in the Age of AI?
Nikki Goth Itoi
Sep 08, 2025
News

Tools that enable teens to create deepfake nude images of each other are compromising child safety, and parents must get involved.

Ethics, Equity, Inclusion
Privacy, Safety, Security
Risks of AI Race Detection in the Medical System
Matthew Lungren
Quick Read · Dec 01, 2021
Policy Brief

This brief warns that AI systems that infer patients’ race in medical settings could deepen existing healthcare disparities.

Healthcare
Ethics, Equity, Inclusion
When AI Imagines a Tree: How Your Chatbot’s Worldview Shapes Your Thinking
Katie Gray Garrison
Jul 28, 2025
News

A new study on generative AI argues that addressing biases requires a deeper exploration of ontological assumptions, challenging the way we define fundamental concepts like humanity and connection.

Ethics, Equity, Inclusion
Generative AI