Intersectional Biases in Generative Language Models and Their Psychosocial Impacts
Event Details
Location: Hybrid
Abstract:
The rapid emergence of generative AI technologies has been shaped by a wave of early excitement and hope for a broad range of use cases. Yet the impacts of the latest models on historically marginalized communities, including the potential for sociotechnical harm, remain relatively understudied.
In this session, the speakers present a line of research uncovering intersectional biases that emerge when generative language models are used for open-ended writing, drawing connections between the models' synthetic text outputs and known linguistic patterns with psychosocial impacts on diverse learners in educational settings.
Speakers
Evan Shieh
Executive Director and AI Researcher, Young Data Scientists League
Faye-Marie Vassel
STEM Education, Equity, and Inclusion Postdoctoral Fellow
If you need a disability-related accommodation, please contact: Madeleine Wright, Communications and Events Coordinator. Requests should be made at least a week before the event.