Intersectional Biases in Generative Language Models and Their Psychosocial Impacts
HAI Seminar with Faye-Marie Vassel & Evan Shieh
Abstract:
The rapid emergence of generative AI technologies has been shaped by a wave of early excitement and hope for a broad range of use cases. Yet the impacts of the latest models on historically marginalized communities, including the potential for sociotechnical harm, remain relatively understudied.
In this session, the speakers present a line of research uncovering intersectional biases in generative language models used for open-ended writing, drawing connections between the models' synthetic text outputs and known linguistic patterns that have psychosocial impacts on diverse learners in educational settings.