CRFM Names Advisory Board for the Foundation Model Transparency Index

Five leaders will serve as the Advisory Board for a new initiative to measure and improve transparency in the foundation model ecosystem.

The Stanford Center for Research on Foundation Models (CRFM) is pleased to announce a five-member advisory board for the Foundation Model Transparency Index (FMTI). The multidisciplinary board brings together Princeton computer scientist Arvind Narayanan, Stanford legal scholar Daniel E. Ho, Harvard philosopher Danielle Allen, and MIT economist Daron Acemoglu, and is chaired by Humane Intelligence CEO Rumman Chowdhury. Together, these leaders have pioneered advances across fields and are world experts on the societal impact of AI.

CRFM is an interdisciplinary initiative born out of the Stanford Institute for Human-Centered Artificial Intelligence (HAI) that aims to directly shape the responsible development and deployment of foundation models via community norms, industry standards, and public policy, including through efforts like the Foundation Model Transparency Index. The Index, in turn, is a research initiative led by CRFM that aims to measure and improve transparency in the AI industry. The first version of the Index, launched in October 2023, scored 10 leading foundation model developers (e.g., OpenAI, Meta, Google, Anthropic) on 100 indicators of transparency. In short, the Index demonstrated the pervasive opacity that plagues the foundation model ecosystem: the average score was just 37 out of 100. The Foundation Model Transparency Index was covered by The Atlantic, Axios, Fortune, The Information, The New York Times, Politico, Rappler, Reuters, and Wired, among other outlets. As policymakers across the US, EU, Canada, and G7 consider disclosure requirements for foundation model developers, the Index is increasingly a canonical resource.

The advisory board will work directly with the Index team, advising on the design, execution, and presentation of subsequent iterations of the Index. Concretely, the Index team will meet regularly with the board to discuss key decision points: How is transparency best measured? How should companies disclose the relevant information publicly? How should scores be computed and presented? And how should findings be communicated to companies, policymakers, and the public? The Index measures transparency in order to bring about greater transparency in the foundation model ecosystem: the board’s collective wisdom will guide the Index team in achieving this goal.

CRFM director Percy Liang says: “The initial release of the Transparency Index has shone a bright light on the status quo. The next challenge will be to maintain and evolve the Index as a trustworthy source as the foundation model ecosystem takes off, with the ultimate aim of improving the status quo. We are excited and honored to have this illustrious, multidisciplinary board guide us through this next stage.” In that spirit, Acemoglu adds: “I am excited to be part of the advisory board of FMTI because I am concerned that there is a general lack of knowledge about what generative AI models are doing, what data they are being trained on, to what extent this is infringing other people's property and creative rights, and how they are going to evolve. More transparency and accountability in the industry is a must to safeguard our future.”


Meet the New Board

Arvind Narayanan

Arvind Narayanan is a professor of computer science at Princeton University and the director of the Center for Information Technology Policy. He co-authored a textbook on fairness and machine learning and is currently co-authoring a book on AI snake oil. He led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. His work was among the first to show how machine learning reflects cultural stereotypes, and his doctoral research showed the fundamental limits of de-identification. Narayanan is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE).

Daniel E. Ho

Daniel E. Ho is the William Benjamin Scott and Luna M. Scott Professor of Law, professor of political science, professor of computer science (by courtesy), senior fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), senior fellow at the Stanford Institute for Economic Policy Research, and director of the Regulation, Evaluation, and Governance Lab (RegLab). Ho serves on the National Artificial Intelligence Advisory Committee (NAIAC), advising the White House on AI policy; as senior advisor on Responsible AI at the U.S. Department of Labor; and as special advisor to the ABA Task Force on Law and Artificial Intelligence. His scholarship focuses on administrative law, regulatory policy, and antidiscrimination law. With the RegLab, his work has developed high-impact demonstration projects of data science and machine learning in public policy.

Danielle Allen

Danielle Allen is James Bryant Conant University Professor at Harvard University. She is a professor of political philosophy, ethics, and public policy and director of the Democratic Knowledge Project and of the Allen Lab for Democracy Renovation. She is also a seasoned nonprofit leader, democracy advocate, national voice on AI and tech ethics, distinguished author, and mom. A past chair of the Mellon Foundation and the Pulitzer Prize Board, and former Dean of Humanities at the University of Chicago, she is a member of the American Academy of Arts and Sciences and the American Philosophical Society. Her many books include the widely acclaimed Talking to Strangers: Anxieties of Citizenship Since Brown v. Board of Education; Our Declaration: A Reading of the Declaration of Independence in Defense of Equality; Cuz: The Life and Times of Michael A.; Democracy in the Time of Coronavirus; and Justice by Means of Democracy. She writes a column on constitutional democracy for the Washington Post. She is also a co-chair of the Our Common Purpose Commission and founder and president of Partners In Democracy, where she advocates for democracy reform to create greater voice and access in our democracy, and to drive progress toward a new social contract that serves and includes us all.

Daron Acemoglu

Daron Acemoglu is an Institute Professor of Economics in the Department of Economics at the Massachusetts Institute of Technology and is also affiliated with the National Bureau of Economic Research and the Centre for Economic Policy Research. His research covers a wide range of areas within economics, including political economy, economic development and growth, human capital theory, growth theory, innovation, search theory, network economics, and learning. He is an elected fellow of the National Academy of Sciences, the British Academy, the American Philosophical Society, the Turkish Academy of Sciences, the American Academy of Arts and Sciences, the Econometric Society, the European Economic Association, and the Society of Labor Economists.

Rumman Chowdhury

Rumman Chowdhury is the CEO and co-founder of Humane Intelligence, a tech nonprofit that creates methods for public evaluation of AI models, as well as a Responsible AI affiliate at Harvard’s Berkman Klein Center for Internet and Society. She is also a research affiliate at the Minderoo Centre for Technology and Democracy at the University of Cambridge and a visiting researcher at the NYU Tandon School of Engineering. Previously, Dr. Chowdhury was the director of the META (ML Ethics, Transparency, and Accountability) team at Twitter, leading a team of applied researchers and engineers to identify and mitigate algorithmic harms on the platform. She was named one of BBC’s 100 Women, recognized as one of the Bay Area’s top 40 under 40, and is a member of the British Royal Society of Arts (RSA). She has also been named by Forbes as one of Five Who are Shaping AI.