
Speakers: Workshop on Sociotechnical AI Safety



Shazeda Ahmed, Postdoctoral Researcher, UCLA

  • Shazeda Ahmed received her Ph.D. from the University of California, Berkeley School of Information. She is currently a fellow in the Transatlantic Digital Debates at the Global Public Policy Institute. She was a pre-doctoral fellow at two Stanford University research centers, the Institute for Human-Centered Artificial Intelligence (HAI) and the Center for International Security and Cooperation (CISAC), and previously worked as a researcher for Upturn, the Mercator Institute for China Studies, Ranking Digital Rights, the Citizen Lab, and the AI Now Institute.

Shiri Dori-Hacohen, Assistant Professor, Department of Computer Science & Engineering, University of Connecticut; Director, Reducing Information Ecosystem Threats (RIET) Lab; Founder & Chair of the Board, AuCoDe

  • Dr. Shiri Dori-Hacohen is an Assistant Professor in the Department of Computer Science & Engineering at the University of Connecticut, where she leads the Reducing Information Ecosystem Threats (RIET) Lab. Dr. Dori-Hacohen’s research focuses on threats to the online information ecosystem and the sociotechnical AI alignment problem. She has served as PI or Co-PI on $7.7M in federal funding from the NSF. Her AI safety & ethics research was cited in the March 2023 AI Open Letter and has won the AI Risk Analysis Award at the NeurIPS ML Safety workshop and the Best Poster Award at the ICLP Secure and Trustworthy AI workshop. She has been quoted and interviewed as an expert in media outlets including Reuters, The Guardian, and Forbes.

Mark Riedl, Professor, School of Interactive Computing, Georgia Tech; Associate Director, Georgia Tech Machine Learning Center

  • Mark Riedl’s research focuses on human-centered artificial intelligence—the development of artificial intelligence and machine learning technologies that understand and interact with human users in more natural ways. Dr. Riedl’s recent work has focused on story understanding and generation, computational creativity, explainable AI, and teaching virtual agents to behave safely.

Marie-Therese Png, DPhil Student, Oxford Internet Institute (OII); Consultant, Google DeepMind; Social Impact Advisor, Cohere.ai

  • Marie-Therese Png is a British, Afro-Caribbean and Chinese-Singaporean scholar whose DPhil research sits at the intersections of technology justice, environmental justice and decoloniality. Her research bridges activism, policy, academia and industry. A part-time student at the OII, she is also a Consultant for Google DeepMind and a Social Impact Advisor for Cohere.ai. Her academic works include Decolonial Theory as Socio-technical Foresight in Artificial Intelligence Research and Critical Roles of Global South Stakeholders in Artificial Intelligence Governance. With a background in ecology, she developed the Deep Sustainability AI program at the London School of Economics and continues to facilitate transnational alliance building between technology and environmental justice practitioners in Asia, Europe, Africa and Latin America. Previously, she was a Technology Advisor in New York, leading the design and implementation of the UN Secretary General’s Digital Cooperation Office, with a special focus on strategic representation of low- and middle-income member states.

Irene Solaiman, Head, Global Policy, Hugging Face

  • Irene Solaiman is an AI safety and policy expert. She is Head of Global Policy at Hugging Face, where she conducts social impact research and leads public policy. Irene serves on the Partnership on AI's Policy Steering Committee and the Center for Democracy and Technology's AI Governance Lab Advisory Committee. She is also a Tech Ethics and Policy Mentor at Stanford University and an International Strategy Forum Fellow at Schmidt Futures. Irene advises responsible AI initiatives at the OECD and IEEE. Her research includes AI value alignment, responsible releases, and combating misuse and malicious use. Irene was named to MIT Technology Review's 35 Innovators Under 35 in 2023 for her research. Irene formerly initiated and led bias and social impact research at OpenAI, where she also led public policy. Her research on adapting GPT-3 behavior received a spotlight at NeurIPS 2021. She also built AI policy at Zillow Group and advised policymakers on responsible autonomous decision-making and privacy as a fellow at Harvard’s Berkman Klein Center.

Rishi Bommasani, Society Lead, Stanford Center for Research on Foundation Models; Ph.D. Candidate in Computer Science, Stanford University

  • Rishi Bommasani researches the societal impact of AI, especially foundation models. He helped build and lead the Stanford Center for Research on Foundation Models (CRFM).  

Shalaleh Rismani, PhD Student, Electrical and Computer Engineering, McGill University

  • Shalaleh Rismani is currently pursuing her PhD in electrical and computer engineering at McGill University and Mila, where she works on interdisciplinary research challenges with her colleagues at the Responsible Autonomy and Intelligent Systems Ethics (RAISE) Lab. Her current research focuses on how to characterize, identify, and mitigate social and ethical failures of machine learning (ML) systems as early as possible in the ML development process. She is also interested in investigating how these kinds of failures reveal themselves in human-ML interactions. Prior to starting her PhD, she co-founded Generation R Consulting, a boutique AI ethics consultancy, and was a full-time design researcher with the Open Roboethics Institute.

Tegan Maharaj, Assistant Professor, Faculty of Information, University of Toronto

  • Tegan's research goal is to contribute understanding and techniques to the growing science of responsible AI development, while usefully applying AI to high-impact ecological problems including climate change, epidemiology, AI alignment, and ecological impact assessments. Her recent research has two themes: (1) using deep models for policy analysis and risk mitigation, and (2) designing data or unit-test environments to empirically evaluate learning behaviour or simulate deployment of an AI system.

Dylan Hadfield-Menell, Bonnie and Marty (1964) Tenenbaum Career Development Assistant Professor, EECS, Massachusetts Institute of Technology

  • Dylan Hadfield-Menell is the Bonnie and Marty (1964) Tenenbaum Career Development Assistant Professor of EECS on the faculty of Artificial Intelligence and Decision-Making and a Schmidt Futures AI2050 Early Career Fellow. He runs the Algorithmic Alignment Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT). His research focuses on the problem of agent alignment: the challenge of identifying behaviors that are consistent with the goals of another actor or group of actors. His research group works to identify solutions to alignment problems that arise from groups of AI systems, principal-agent pairs (e.g., human-robot teams), and societal oversight of ML systems.

Iason Gabriel, Research Scientist, Google DeepMind

  • Iason is a Staff Research Scientist at Google DeepMind, where he helped found the Ethics Research Team. His work focuses on artificial intelligence, value alignment, human rights and democracy. Iason has published widely on these topics in venues such as PNAS, Daedalus, Minds and Machines, Philosophy and Technology, FAccT and NeurIPS. Prior to joining DeepMind, he taught philosophy at the University of Oxford and also worked for the United Nations Development Programme in Lebanon and Sudan. He holds a DPhil in political philosophy from the University of Oxford and has held visiting research positions at Harvard University, Princeton University, the World Bank and the University of California, Berkeley.

Nahema Marchal, Socio-technical Research Scientist, Google DeepMind

  • Nahema Marchal is a scholar of technology and a Research Scientist on the Ethics & Society team at DeepMind. Throughout her career as a researcher and tech policy expert, she has worked closely with civil society groups, policymakers and industry partners across the US, UK, EU, West Africa and Latin America on issues related to the governance and use of digital technologies, including online harms, content moderation and interference in democratic processes. Prior to joining academia, she held positions in the media and non-profit sectors. She is deeply passionate about advancing more equitable and regenerative futures through public interest technology, inclusive policy-making, and new forms of democratic governance.

Deep Ganguli, Research Scientist, Societal Impacts, Anthropic

  • Deep Ganguli is a research scientist at Anthropic focusing on the interpretability, fairness, transparency, and societal impacts of AI. Prior to joining Anthropic, he was director of research programs at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), as well as a science program officer at the Chan Zuckerberg Initiative where he developed grant making programs and convenings to foster interdisciplinary research at the intersection of artificial intelligence, data engineering, and cellular biology. He has led his own academic and industrial research in theoretical neuroscience, machine learning with humans in the loop, distributed computing, and biomedical data analysis. Deep has a PhD in computational neuroscience from New York University, and a BS in electrical engineering and computer science from the University of California at Berkeley.

Nathan Lambert, Research Scientist, Allen Institute for AI

  • Nathan Lambert is a Research Scientist at the Allen Institute for AI focusing on RLHF. Previously, he helped build an RLHF research team at Hugging Face. He received his PhD from the University of California, Berkeley, working at the intersection of machine learning and robotics, advised by Professor Kristofer Pister in the Berkeley Autonomous Microsystems Lab and by Roberto Calandra at Meta AI Research. He interned at Facebook AI and DeepMind during his PhD. Nathan was awarded the UC Berkeley EECS Demetri Angelakos Memorial Achievement Award for Altruism for his efforts to improve community norms.

Alondra Nelson, Harold F. Linder Professor of Social Science, Institute for Advanced Study; Distinguished Senior Fellow, Center for American Progress

  • Alondra Nelson is the Harold F. Linder Professor of Social Science at the Institute for Advanced Study and a distinguished senior fellow at the Center for American Progress. A former deputy assistant to President Joe Biden, she served as principal deputy director for science and society and acting director of the Office of Science and Technology Policy. During her White House tenure, Nelson led the team that developed the landmark "Blueprint for an AI Bill of Rights," which became a cornerstone of the Biden-Harris administration’s AI policy strategy. She was the 14th president and CEO of the Social Science Research Council, where she developed a series of initiatives that brought research to bear on the impact of platforms on social relations and political culture. An acclaimed social scientist, Nelson has served on the faculty of Yale University as well as Columbia University, where she was the inaugural Dean of Social Science. She writes and lectures widely on the intersections of science, technology, medicine, and social inequality. She is the author of several books, most recently The Social Life of DNA. Her essays, reviews, and commentary have been featured in national and international media outlets, including The New York Times, The Washington Post, The Wall Street Journal, Wired, and Science. She is a member of the American Academy of Arts and Sciences, the American Association for the Advancement of Science, the American Philosophical Society, the National Academy of Medicine, and the Council on Foreign Relations. Including her in its list of Ten People Who Shaped Science in 2022, Nature said of Nelson, “this social scientist made strides for equity, integrity and open access.” In 2023, she was named to the inaugural TIME100 list of the most influential people in the field of AI.