How responsible use and strong regulation of AI can help us strengthen rather than undermine democracy.
HAI Executive Director Russell Wald and AI Index Lead Sha Sajadieh discuss findings from the 2026 AI Index on the stark contrast between American and Chinese public sentiment when asked about their excitement for AI adoption.
This methodological paper presents the Global AI Vibrancy Tool, an interactive suite of visualizations designed to facilitate cross-country comparisons of AI vibrancy, using indicators organized into pillars. The tool offers customizable features that enable users to conduct in-depth country-level comparisons and longitudinal analyses of AI-related metrics.

This brief introduces a framework of eight techniques for approximating political neutrality in AI models.
"Countries pursue AI sovereignty with four main objectives in mind: cultural autonomy, national security, economic competitiveness, and regulatory oversight," says Juan N. Pava, Stanford HAI Research Fellow.
Mounting evidence indicates that the artificial intelligence (AI) systems that rank our social media feeds bear nontrivial responsibility for amplifying partisan animosity: negative thoughts, feelings, and behaviors toward political out-groups. Can we design these AIs to consider democratic values, such as mitigating partisan animosity, as part of their objective functions? We introduce a method for translating established, vetted social scientific constructs into AI objective functions, which we term societal objective functions, and demonstrate the method with an application to the political science construct of anti-democratic attitudes. Traditionally, we have lacked observable outcomes with which to train such models; however, the social sciences have developed survey instruments and qualitative codebooks for these constructs, and their precision facilitates translation into detailed prompts for large language models. We apply this method to create a democratic attitude model that estimates the extent to which a social media post promotes anti-democratic attitudes, and test this model across three studies. In Study 1, we test the attitudinal and behavioral effectiveness of the intervention among US partisans (N=1,380) by manually annotating (alpha=.895) social media posts with anti-democratic attitude scores and testing several feed ranking conditions based on these scores. Removal (d=.20) and downranking (d=.25) reduced participants' partisan animosity without compromising their experience and engagement. In Study 2, we scale up the manual labels by creating the democratic attitude model, finding strong agreement with manual labels (rho=.75). Finally, in Study 3, we replicate Study 1 using the democratic attitude model instead of manual labels to test its attitudinal and behavioral impact (N=558), and again find that feed downranking using the societal objective function reduced partisan animosity (d=.25).
This work presents a novel strategy for drawing on social science theory and methods to mitigate societal harms in social media AIs.

In his new book, Shared Wisdom, the scholar outlines the limits of today’s political and social structures, which he considers caught in historical ruts, and discusses how AI might help to rebuild a flourishing community.


This brief presents the findings of an experiment that measures how persuasive AI-generated propaganda is compared to foreign propaganda articles written by humans.




Stanford HAI joined global leaders to discuss the balance between AI innovation and safety and explore future policy paths.


This issue brief series examines how technology will shape public debate, affect the electoral process, and may even determine election outcomes.


Stanford HAI has built a major portfolio of education opportunities for state, federal, and international policy leaders to strengthen AI governance.