AI-generated child sexual abuse material (AI CSAM) carries unique harms. Schools have a chance to proactively prepare their AI CSAM prevention and response strategies.

“It connects back to my fear that the people with the fewest resources will be most affected by the downsides of AI,” says HAI Policy Fellow Riana Pfefferkorn in response to a viral AI-generated deepfake video.

We invited 11 sci-fi filmmakers and AI researchers to Stanford for Stories for the Future, a day-and-a-half experiment in fostering new narratives about AI. Researchers shared perspectives on AI, and filmmakers reflected on the challenges of writing AI narratives. Together, researcher-writer pairs transformed a research paper into a written scene. The challenge? Each scene had to include an AI manifestation but could not be about the personhood of AI or AI as a threat. Read the results of this project.

This brief introduces a framework of eight techniques for approximating political neutrality in AI models.

Stanford researchers show that although top language models cannot yet accurately diagnose children’s speech disorders, fine-tuning and other approaches could well change the game.

HAI Co-Director James Landay speaks about how AI labs need to invest more in product design.
Current societal trends reflect increased mistrust in science and lowered civic engagement, which threaten to impair research that is foundational for ensuring public health and advancing health equity. One effective countermeasure to these trends lies in community-facing citizen science applications that increase public participation in scientific research, making this field an important target for artificial intelligence (AI) exploration. We highlight potentially promising citizen science AI applications that extend beyond individual use to the community level, including conversational large language models, text-to-image generative AI tools, descriptive analytics for analyzing integrated macro- and micro-level data, and predictive analytics. These novel adaptations of AI technologies for community-engaged participatory research also bring an array of potential risks. We highlight possible negative externalities and mitigations for some of the potential ethical and societal challenges in this field.

This brief evaluates the impact of authorship labels on the persuasiveness of AI-written policy messages.

From intake forms to ambient scribes, artificial intelligence is transforming your medical visits. A Stanford expert explains the questions every patient should ask.

Interventions on model-internal states are fundamental operations in many areas of AI, including model editing, steering, robustness, and interpretability. To facilitate such research, we introduce pyvene, an open-source Python library that supports customizable interventions on a range of different PyTorch modules. pyvene supports complex intervention schemes with an intuitive configuration format, and its interventions can be static or include trainable parameters. We show how pyvene provides a unified and extensible framework for performing interventions on neural models and sharing the intervened-upon models with others. We illustrate the power of the library via interpretability analyses using causal abstraction and knowledge localization. We publish our library through the Python Package Index (PyPI) and provide code, documentation, and tutorials at https://github.com/stanfordnlp/pyvene.
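The abstract's core operation is an intervention that swaps or edits a model's internal activations during a forward pass. As a rough, hand-rolled sketch of the kind of interchange intervention pyvene packages up declaratively, the following uses plain PyTorch forward hooks rather than pyvene's own API; the model (GPT-2), layer index, and prompts are assumptions chosen purely for illustration.

```python
# A minimal sketch of an interchange intervention using raw PyTorch hooks.
# This is NOT pyvene's API; it shows the underlying operation the library
# generalizes: capture a hidden state from a "source" run, then patch it
# into a "base" run at the same layer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model.eval()

layer = model.transformer.h[5]  # block 5's output; layer choice is illustrative
captured = {}

def capture_hook(module, inputs, output):
    # Source run: record this block's hidden states.
    captured["h"] = output[0].detach()

def patch_hook(module, inputs, output):
    # Base run: swap in the captured hidden states (shapes must match,
    # which is why both prompts below tokenize to the same length).
    return (captured["h"],) + output[1:]

base = tokenizer("The capital of Spain is", return_tensors="pt")
source = tokenizer("The capital of Italy is", return_tensors="pt")

with torch.no_grad():
    handle = layer.register_forward_hook(capture_hook)
    model(**source)            # run 1: capture activations from the source
    handle.remove()

    handle = layer.register_forward_hook(patch_hook)
    patched = model(**base)    # run 2: base input with source activations
    handle.remove()

# If the swapped layer carries the "Italy" information, the patched base
# run's next-token prediction shifts accordingly.
print(tokenizer.decode(patched.logits[0, -1].argmax()))
```

What pyvene adds over ad hoc hook code like this, per the abstract, is a unified configuration format for complex intervention schemes, support for trainable as well as static interventions, and a standard way to share the intervened-upon models with others.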