This brief warns of the dangers of generative adversarial networks that can make realistic deepfakes, calling for comprehensive norms, regulations, and laws to counter AI-driven disinformation.


Stanford’s Digital Economy Lab taps multidisciplinary group of thinkers to offer insights on AI and governance in volume called The Digitalist Papers.


Stanford HAI’s new Policy Fellow will study AI’s implications for privacy and safety, and explore how we can build rights-respecting artificial intelligence.

James Landay, Co-Founder of Stanford HAI, says the real harms of AI are disinformation, deepfakes, discrimination, and job displacement, though not a lot of this has happened yet.
CRFM Society Lead Rishi Bommasani comments on the lack of clarity on what has changed in the year since major AI companies adopted the White House's set of eight voluntary commitments on how to develop AI in a safe and trustworthy way.
Vanessa Parli, HAI Director of Research Programs, explains the importance of evaluation methods when it comes to AI benchmarking, noting the significance of assessing traits like "bias, toxicity, truthfulness, and other responsibility aspects."