Dan Ho, HAI Senior Fellow and director of the Stanford RegLab, discusses RegLab's AI model that analyzes decades of property records, helping to identify illegal racially restrictive language in housing documents.
Despite major advances in machine learning and neural networks, AI systems still depend on human direction. This article covers HAI's 2022 conference, where attendees were encouraged to rethink AI systems with a “human in the loop” and to consider a future where people remain at the center of decision making.
Peter Norvig, Distinguished Education Fellow at Stanford HAI, comments on how limiting an AI agent's budget, transaction times, and capabilities can help AI agents “operate safely within defined boundaries."
A team of researchers from Stanford HAI, MIT, and Princeton created the Foundation Model Transparency Index, which rated the transparency of 10 AI companies; each one received a failing grade.
HAI Deputy Director Russell Wald and Computer Science Professor Sanmi Koyejo comment on HAI's recent paper “Exploring the Impact of AI on Black Americans: Considerations for the Congressional Black Caucus’s Policy Initiatives."
James Landay, Co-Founder of Stanford HAI, says the real harms of AI are disinformation, deepfakes, discrimination, and job displacement, little of which has happened yet.
HAI Co-Director Fei-Fei Li is recognized for her commitment to ethical AI and interdisciplinary research, continuing to shape the future of AI development and application.
This article gives an overview of the sectors and business processes affected by AI tool adoption, including healthcare, retail, hiring, and education, citing AI Index data on a surge in fundraising for generative AI companies since 2022.