Advancing AI Audit - Project Showcase and Lessons Learned
How can we effectively design and develop practical tools to assess the presence of bias and potential discrimination in AI systems?
Last August, HAI and the Cyber Policy Center launched the AI Audit Challenge, an initiative inviting teams from around the world to submit models, solutions, and tools aimed at improving our ability to evaluate AI systems. This event spotlights the four award-winning entries, with discussions among participants and AI experts on effective practices and lessons learned from the Challenge.
9:00 a.m. - 9:05 a.m. PDT
HAI International Policy Fellow; International Policy Director, Cyber Policy Center, Stanford University
Responsible AI Fellow, Berkman Klein Center for Internet & Society, Harvard University
9:05 a.m. - 9:15 a.m. PDT
Auditbot
Neal Lathia
Monzo Bank, UK
9:15 a.m. - 9:25 a.m. PDT
Ceteris Paribus
Edward Chen
Stanford University
9:25 a.m. - 9:35 a.m. PDT
End-User Audits
Michelle S. Lam
Stanford University
9:35 a.m. - 9:45 a.m. PDT
HateCheck
Paul Röttger
University of Oxford
Hannah Rose Kirk
University of Oxford
Bertie Vidgen
University of Oxford
9:45 a.m. - 10:15 a.m. PDT
Moderated panel discussion
HAI International Policy Fellow; International Policy Director, Cyber Policy Center, Stanford University
10:15 a.m. - 11:15 a.m. PDT
Moderated panel discussion
Founder and Principal Researcher, Montreal AI Ethics Institute
Schwartz Reisman Chair in Technology and Society; Professor of Law and Professor of Strategic Management; CIFAR AI Chair; Director, Schwartz Reisman Institute for Technology and Society
Staff Research Scientist, DeepMind's Ethics and Society Team
Raj & Neera Singh Assistant Professor, Computer & Information Science, University of Pennsylvania
Responsible AI Fellow, Berkman Klein Center for Internet & Society, Harvard University