AI is accelerating discovery in the sciences and fostering interdisciplinary breakthroughs.
Vanessa Parli, Stanford HAI Director of Research and AI Index Steering Committee member, notes that the 2025 AI Index finds academic AI research is both flourishing and rising in quality.
Environmental, social, and governance risks threaten economies and human well-being around the world, yet we have the power to build a sustainable planet. Recent advances in AI are surfacing problems that were previously hard to identify: as machine vision helps us observe our world, we can detect issues, track them, and design targeted interventions. In this brief, we examine innovations by Stanford researchers that use AI and machine learning techniques to shift our world from one that depletes resources to one that preserves them for the future. For example, we can now track methane emissions across our energy and food systems, opening an avenue for policy formation and enforcement through near-real-time tracing. AI turns knowledge into action and will play a key role in measuring and effectively achieving environmental, social, and governance goals.
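To make the detection-and-tracking idea concrete, here is a minimal sketch of how imagery tiles from monitored sites might be screened for likely methane plumes. The detect_plume threshold rule and the site names are hypothetical stand-ins; the brief does not specify a particular model or data pipeline.

```python
# Minimal sketch of near-real-time emissions screening over imagery tiles.
# detect_plume() is a hypothetical stand-in (simple threshold on a
# methane-sensitive band), not a description of any specific Stanford system.
import numpy as np

def detect_plume(tile: np.ndarray, threshold: float = 0.6) -> bool:
    """Flag a tile if its mean methane-band response exceeds a threshold."""
    return float(tile.mean()) > threshold

def scan_region(tiles: dict[str, np.ndarray]) -> list[str]:
    """Return site IDs whose tiles show a likely plume, for targeted follow-up."""
    return [site_id for site_id, tile in tiles.items() if detect_plume(tile)]

# Example: two synthetic tiles standing in for satellite observations.
tiles = {
    "well_pad_A": np.random.default_rng(0).uniform(0.5, 0.9, size=(64, 64)),
    "feedlot_B": np.random.default_rng(1).uniform(0.0, 0.4, size=(64, 64)),
}
print(scan_region(tiles))  # e.g. ['well_pad_A']
```

In a real system, the thresholding stand-in would be replaced by a trained machine vision model, with flagged sites routed to analysts or regulators for near-real-time follow-up.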
We invited 11 sci-fi filmmakers and AI researchers to Stanford for Stories for the Future, a day-and-a-half experiment in fostering new narratives about AI. Researchers shared their perspectives on AI, and filmmakers reflected on the challenges of writing AI narratives. Together, researcher-writer pairs transformed a research paper into a written scene. The challenge? Each scene had to include an AI manifestation but could not be about the personhood of AI or AI as a threat. Read the results of this project.
The 2025 AI Index highlights key developments over the past year, including major gains in model performance, record levels of private investment, new regulatory action, and growing real-world adoption.
This industry brief focuses on AI research in healthcare and the life sciences, with particular attention to its implications in a post-COVID-19 world. Stanford HAI synthesizes the latest work from Stanford faculty across drug discovery, telehealth, ambient intelligence, operational excellence, medical imaging, augmented intelligence, and data and privacy. Read on to learn how the adoption of AI may transform these applications.
Vanessa Parli, HAI Director of Research and AI Index Steering Committee member, speaks about the biggest takeaways from the 2025 AI Index Report.
Current societal trends reflect increasing mistrust of science and declining civic engagement, which threaten to impair research that is foundational to ensuring public health and advancing health equity. One effective countermeasure to these trends lies in community-facing citizen science applications that increase public participation in scientific research, making this field an important target for artificial intelligence (AI) exploration. We highlight potentially promising citizen science AI applications that extend beyond individual use to the community level, including conversational large language models, text-to-image generative AI tools, descriptive analytics for analyzing integrated macro- and micro-level data, and predictive analytics. The novel adaptation of AI technologies for community-engaged participatory research also brings an array of potential risks. We highlight possible negative externalities and mitigations for some of the potential ethical and societal challenges in this field.
"The AI Index equips policymakers, researchers, and the public with the data they need to make informed decisions — and to ensure AI is developed with human-centered values at its core," says Russell Wald, Executive Director of Stanford HAI and Steering Committee member of the AI Index.
"The AI Index equips policymakers, researchers, and the public with the data they need to make informed decisions — and to ensure AI is developed with human-centered values at its core," says Russell Wald, Executive Director of Stanford HAI and Steering Committee member of the AI Index.
Measuring the impact of online misinformation is challenging. Traditional measures, such as user views or shares on social media, are incomplete because not everyone who is exposed to misinformation is equally likely to believe it. To address this issue, we developed a method that combines survey data with observational Twitter data to probabilistically estimate the number of users both exposed to and likely to believe a specific news story. As a proof of concept, we applied this method to 139 viral news articles and find that although false news reaches an audience with diverse political views, users who are both exposed and receptive to believing false news tend to have more extreme ideologies. These receptive users are also more likely to encounter misinformation earlier than those who are unlikely to believe it. This mismatch between overall user exposure and receptive user exposure underscores the limitation of relying solely on exposure or interaction data to measure the impact of misinformation, as well as the challenge of implementing effective interventions. To demonstrate how our approach can address this challenge, we then conducted data-driven simulations of common interventions used by social media platforms. We find that these interventions are only modestly effective at reducing exposure among users likely to believe misinformation, and their effectiveness quickly diminishes unless implemented soon after misinformation’s initial spread. Our paper provides a more precise estimate of misinformation’s impact by focusing on the exposure of users likely to believe it, offering insights for effective mitigation strategies on social media.
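As a rough illustration of the estimation idea, the sketch below combines a survey-calibrated probability of belief (a hypothetical logistic curve here, not the authors' fitted model) with ideology scores for exposed users to compute the expected number of receptive exposures.

```python
# Minimal sketch of the estimation idea under assumed inputs: a survey-fitted
# probability of belief as a function of ideology (hypothetical logistic curve)
# and a list of exposed users with ideology scores. Not the authors' actual model.
import math

def p_believe(ideology: float, intercept: float = -1.0, slope: float = 1.5) -> float:
    """Survey-calibrated probability that a user with this ideology believes the story."""
    return 1.0 / (1.0 + math.exp(-(intercept + slope * ideology)))

def expected_receptive_exposure(exposed_ideologies: list[float]) -> float:
    """Expected number of exposed users who are also likely to believe the story."""
    return sum(p_believe(x) for x in exposed_ideologies)

# Example: five exposed users with ideology scores (negative = left, positive = right).
exposed = [-1.2, -0.3, 0.4, 1.1, 2.0]
print(f"Exposed users: {len(exposed)}")
print(f"Expected receptive exposures: {expected_receptive_exposure(exposed):.2f}")
```

Comparing the raw exposure count with the expected receptive exposures mirrors the paper's distinction between overall exposure and exposure among users likely to believe a story.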
Trained on a dataset that includes all known living species – and a few extinct ones – Evo 2 can predict the form and function of proteins in the DNA of all domains of life and run experiments in a fraction of the time it would take a traditional lab.