Ethics and Artificial Intelligence
Guiding & Building the Future of AI
Stanford HAI's mission is to advance AI research, education, policy, and practice to improve the human condition. One main focus area is the societal implications of these technologies. Below are recent research and discussions on the impact and ethics of artificial intelligence.
Featured HAI Research
Stanford launches an Ethics and Society Review Board that asks researchers to take an early look at the impact of their...
Bots could one day dispense medical advice, teach our children, or call to collect debt. How can we avoid being deceived...
This “severe” bias must be addressed before these language models become ingrained in real-world tasks.
Patient data from just three states trains most AI diagnostic tools.
Featured HAI Videos
- Directors' Conversations: Susan Liautaud and Corporate Ethics
- Renata Avila: Prototyping Feminist AI
- Kathleen Creel: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems
- Elizabeth Adams: The Path to Public Oversight of Surveillance Technology in Minneapolis
- Coded Bias: A Conversation with Director Shalini Kantayya
- Owning Ethics: Organizational Responsibility and Ethics in Silicon Valley
- Ethical Malice in Peer-Reviewed Machine Learning Literature
HAI Policy Briefs
Domain Shift & Emerging Questions in Facial Recognition Technology
Facial recognition technologies have grown in sophistication and adoption throughout American society. Significant anxieties around the technology have emerged—including privacy concerns, worries about surveillance in both public and private settings, and the perpetuation of racial bias.
Toward Fairness in Health Care Training Data
With recent advances in artificial intelligence (AI), researchers can now train sophisticated computer algorithms to interpret medical images, often with accuracy comparable to that of trained physicians. Yet our recent survey of medical research shows that these algorithms rely on datasets that lack population diversity and could introduce bias into the understanding of a patient’s health condition.