Generative Frictions: A Conversation on AI | Stanford HAI


Status
Past
Date
Friday, October 10, 2025 3:00 PM - 4:30 PM US/Pacific
Location
Gates Computer Science Building, 119
Topics
Generative AI
Design, Human-Computer Interaction
Government, Public Administration

A Conversation with Lucy Suchman and Terry Winograd on AI.

Event Contact
Nava Haghighi
navaha@stanford.edu

Related Events

Hari Subramonyam | Learning by Creating: A Human-Centered Vision for AI in Education
Seminar
Mar 11, 2026, 12:00 PM - 1:15 PM

The rapid growth of large language models has created new challenges for the Wikimedia Foundation in managing changing traffic patterns while upholding its free knowledge mission and fostering human engagement.

Entering a New Era of Human Civilization: A Discussion with Craig Mundie and Tom Friedman
Lecture
Jan 20, 2026

Join us for a fireside chat about AI's impact on the world with distinguished technology executive Craig Mundie and New York Times columnist Thomas Friedman. Former Stanford President John Hennessy will moderate the discussion. Welcome remarks will be provided by Colin Kahl, director of the Freeman Spogli Institute for International Studies.

Adam Becker & Jon Evans | Book Talk: “More Everything Forever” and “Exadelic”
Seminar
Jan 21, 2026, 12:00 PM - 1:15 PM

Join HAI Policy Fellow Riana Pfefferkorn for a conversation about the potential future(s) of AI and humanity with Adam Becker, author of More Everything Forever, and Jon Evans, author of Exadelic and the Gradient Ascendant newsletter.

This conversation brings together Lucy Suchman, Professor Emerita of the Anthropology of Science and Technology at Lancaster University, and Terry Winograd, Professor Emeritus in the Computer Science Department at Stanford University. Considering their influential bodies of work, developed individually, in dialogue, and in collaboration with one another, we will examine the relevance of that work in the context of today’s AI developments. We will revisit two historic encounters: first, the formation and development of Computer Professionals for Social Responsibility (CPSR) during an earlier moment of technological and political distress, and its significance for contemporary global and national politics; second, the debate between Suchman and Winograd, attending not only to its content but to its generative reverberations in the CSCW community and the lessons such generative frictions offer as we navigate the challenges and possibilities of AI. Nava Haghighi, a doctoral candidate in Computer Science at Stanford University, will moderate the conversation.

Panelists:

Lucy Suchman is Professor Emerita of the Anthropology of Science and Technology at Lancaster University in the UK. She was previously a principal scientist at Xerox’s Palo Alto Research Center (PARC), where she spent twenty years as a researcher. Her current research extends her longstanding critical engagement with the fields of artificial intelligence and human-computer interaction to the domain of contemporary militarism. She was a founding member of Computer Professionals for Social Responsibility, served on its Board of Directors from 1982 to 1990, and is a current member of the International Committee for Robot Arms Control (ICRAC). She is the author of Human-Machine Reconfigurations (2007) and Plans and Situated Actions: The Problem of Human-Machine Communication (1987), both published by Cambridge University Press. Other recent publications include Suchman, L. (2023). Imaginaries of omniscience: Automating intelligence in the US Department of Defense. Social Studies of Science, 53(5), 761–786; and Suchman, L. (2020). Algorithmic Warfare and the Reinvention of Accuracy. Critical Studies on Security, 8(2), 175–187.

Terry Winograd is Professor Emeritus in the Computer Science Department at Stanford University. During his 40 years of teaching and research, he created and directed the Human-Computer Interaction Group and the teaching and research program in Human-Computer Interaction Design at Stanford, and was a founding faculty member of the Hasso Plattner Institute of Design (the “d.school”). Winograd did pioneering research in artificial intelligence, particularly in natural language understanding, during his PhD work at the MIT Artificial Intelligence Lab in the 1960s. He famously became disillusioned with the direction of mainstream AI research in the late 1970s because of its narrow focus on symbol manipulation and its neglect of human social practices, embodiment, and context. His 1986 book with Fernando Flores, Understanding Computers and Cognition, marked a major departure in the philosophy underlying AI. He was a founding member and National President of Computer Professionals for Social Responsibility. He is a member of the ACM CHI Academy and an ACM Fellow, and he received the 2011 CHI Lifetime Research Achievement Award. He is a Distinguished Fellow of the Stanford Institute for Human-Centered AI and serves on the board of Corporate Accountability International. He has consulted for a number of companies, including Google, which was founded by Stanford students from his research projects. Winograd has been married to Prof. Carol Hutner Winograd, MD, for 57 years. They have daughters Shoshana and Avra and five grandsons.

Nava Haghighi is a PhD candidate in computer science at Stanford University, focusing on human-centered AI. Her work examines the ways in which sociotechnical artifacts shape ontologies (the boundaries of what we allow ourselves to imagine) and how we might move toward centering ontological multiplicity. As a critical technical designer, she develops theories and methods for surfacing the ontological assumptions embedded in current AI systems, and she designs and builds systems that expand those assumptions. She has worked as a PhD researcher at Apple with the Human-centered Machine Learning and Body-sensing Intelligence groups, as a research resident at SPACE10, and as a designer with companies such as Lexus, Tesla, and other internationally recognized architecture and design firms. She co-founded Atolla, an AI skincare company that was acquired in 2021. Nava holds a dual Master of Science in computer science and integrated design and management from MIT and a Bachelor of Architecture from California Polytechnic State University, San Luis Obispo.

This event is hosted by the Stanford Institute for Human-Centered AI (HAI), an interdisciplinary institute established in 2019 to advance AI research, education, policy, and practice.

Lucy Suchman
Professor Emerita of the Anthropology of Science and Technology at Lancaster University in the UK
Terry Winograd
Professor Emeritus of Computer Science, Stanford University
Moderator
Nava Haghighi
PhD candidate in computer science at Stanford University