
Generative AI


Riana Pfefferkorn | Student Misuse of AI-Powered “Undress” Apps
Seminar | Dec 03, 2025 | 12:00 PM - 1:15 PM

AI-generated child sexual abuse material (AI CSAM) carries unique harms. Schools have a chance to proactively prepare their AI CSAM prevention and response strategies.


Our Racist, Terrifying Deepfake Future Is Here
Nature | Media Mention | Nov 03, 2025
Topics: Generative AI; Regulation, Policy, Governance; Law Enforcement and Justice

“It connects back to my fear that the people with the fewest resources will be most affected by the downsides of AI,” says HAI Policy Fellow Riana Pfefferkorn in response to a viral AI-generated deepfake video.

Stories for the Future 2024
Isabelle Levent | Research | Deep Dive | Mar 31, 2025
Topics: Machine Learning; Generative AI; Arts, Humanities; Communications, Media; Design, Human-Computer Interaction; Sciences (Social, Health, Biological, Physical)

We invited 11 sci-fi filmmakers and AI researchers to Stanford for Stories for the Future, a day-and-a-half experiment in fostering new narratives about AI. Researchers shared perspectives on AI, and filmmakers reflected on the challenges of writing AI narratives. Together, researcher-writer pairs transformed a research paper into a written scene. The challenge? Each scene had to include an AI manifestation but could not be about the personhood of AI or AI as a threat. Read the results of this project.

Toward Political Neutrality in AI
Jillian Fisher, Ruth E. Appel, Yulia Tsvetkov, Margaret E. Roberts, Jennifer Pan, Dawn Song, Yejin Choi
Policy Brief | Quick Read | Sep 10, 2025
Topics: Democracy; Generative AI

This brief introduces a framework of eight techniques for approximating political neutrality in AI models.

David Nguyen
Person
Topics: Economy, Markets; Workforce, Labor; Generative AI

Using AI to Streamline Speech and Language Services for Children
Katharine Miller | News | Oct 27, 2025
Topics: Generative AI; Healthcare

Stanford researchers show that although top language models cannot yet accurately diagnose children’s speech disorders, fine-tuning and other approaches could well change the game.

All Work Published on Generative AI

New AI Browsers Could Usher In A Web Where Agents Do Our Bidding—Eventually
Fast Company | Media Mention | Oct 23, 2025
Topics: Design, Human-Computer Interaction; Generative AI

HAI Co-Director James Landay speaks about how AI labs need to invest more in product design.

The Promise and Perils of Artificial Intelligence in Advancing Participatory Science and Health Equity in Public Health
Abby C King, Zakaria N Doueiri, Ankita Kaulberg, Lisa Goldman Rosas
Research | Feb 14, 2025
Topics: Foundation Models; Generative AI; Machine Learning; Natural Language Processing; Sciences (Social, Health, Biological, Physical); Healthcare

Current societal trends reflect an increased mistrust in science and a lowered civic engagement that threaten to impair research that is foundational for ensuring public health and advancing health equity. One effective countermeasure to these trends lies in community-facing citizen science applications to increase public participation in scientific research, making this field an important target for artificial intelligence (AI) exploration. We highlight potentially promising citizen science AI applications that extend beyond individual use to the community level, including conversational large language models, text-to-image generative AI tools, descriptive analytics for analyzing integrated macro- and micro-level data, and predictive analytics. The novel adaptations of AI technologies for community-engaged participatory research also bring an array of potential risks. We highlight possible negative externalities and mitigations for some of the potential ethical and societal challenges in this field.

Labeling AI-Generated Content May Not Change Its Persuasiveness
Isabel Gallegos, Chen Shani, Weiyan Shi, Federico Bianchi, Izzy Benjamin Gainsburg, Dan Jurafsky, Robb Willer
Policy Brief | Quick Read | Jul 30, 2025
Topics: Generative AI; Regulation, Policy, Governance

This brief evaluates the impact of authorship labels on the persuasiveness of AI-written policy messages.

Percy Liang
Person
Associate Professor of Computer Science, Stanford University | Director, Stanford Center for Research on Foundation Models | Senior Fellow, Stanford HAI
Topics: Foundation Models; Generative AI; Machine Learning; Natural Language Processing

How Is AI Changing Your Doctor Visit?
Nikki Goth Itoi | News | Oct 20, 2025
Topics: Healthcare; Generative AI

From intake forms to ambient scribes, artificial intelligence is transforming your medical visits. A Stanford expert explains the questions every patient should ask.

pyvene: A Library for Understanding and Improving PyTorch Models via Interventions
Zhengxuan Wu, Atticus Geiger, Jing Huang, Noah Goodman, Christopher Potts, Aryaman Arora, Zheng Wang
Research | Jun 01, 2024
Topics: Natural Language Processing; Generative AI; Machine Learning; Foundation Models

Interventions on model-internal states are fundamental operations in many areas of AI, including model editing, steering, robustness, and interpretability. To facilitate such research, we introduce pyvene, an open-source Python library that supports customizable interventions on a range of different PyTorch modules. pyvene supports complex intervention schemes with an intuitive configuration format, and its interventions can be static or include trainable parameters. We show how pyvene provides a unified and extensible framework for performing interventions on neural models and sharing the intervened-upon models with others. We illustrate the power of the library via interpretability analyses using causal abstraction and knowledge localization. We publish our library through the Python Package Index (PyPI) and provide code, documentation, and tutorials at https://github.com/stanfordnlp/pyvene.
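The pyvene abstract describes a configuration-driven API for intervening on model internals. As a concrete illustration, here is a minimal sketch of a static intervention, modeled on the zero-out example in the project's documentation; the specific names used here (pv.create_gpt2, pv.IntervenableModel, the config keys, and the call signature) are assumptions drawn from one version of the library, so consult https://github.com/stanfordnlp/pyvene for the current API.

# Minimal sketch: zero out one MLP-output representation in GPT-2 via pyvene.
# Assumes pyvene's dict-based config API; verify these names against the
# version you have installed (pip install pyvene).
import torch
import pyvene as pv

# Convenience loader bundled with pyvene (assumed available in your version).
_, tokenizer, gpt2 = pv.create_gpt2()

# A static intervention: overwrite the layer-0 MLP output at selected token
# positions with a fixed all-zero source representation.
pv_gpt2 = pv.IntervenableModel({
    "layer": 0,
    "component": "mlp_output",
    "source_representation": torch.zeros(gpt2.config.n_embd),
}, model=gpt2)

base = tokenizer("The capital of Spain is", return_tensors="pt")
orig_outputs, intervened_outputs = pv_gpt2(
    base,
    unit_locations={"base": 3},    # intervene at the fourth token position
    output_original_output=True,   # also return the un-intervened forward pass
)

Comparing orig_outputs with intervened_outputs isolates the causal contribution of that single representation, which is the basic move behind the causal-abstraction and knowledge-localization analyses the abstract mentions.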