Stanford HAI Selects 12 New Student Affinity Groups

November 20, 2023

This year, affinity group topics include accessibility for individuals with disabilities, artistic creation, education, healthcare, journalism, workforce productivity, and more. 

The Stanford Institute for Human-Centered AI (HAI) announced the selection of twelve new student affinity groups for the 2023-24 academic year. 

These groups support HAI’s mission to advance AI research, education, policy, and practice with a core focus on enhancing the human condition. The program brings together students from across Stanford schools who share an interest in a topic related to human-centered AI. HAI provides a space for students to share ideas, develop intellectually, and strengthen the community of future leaders dedicated to building AI that benefits all of humanity.

“The most exciting part of these affinity groups is seeing the creativity and ingenuity of the Stanford students,” says Vanessa Parli, HAI director of research programs. “They are all pursuing purposeful contributions towards human-centered AI, and we can’t wait to see the impact of our groups well into the future.”

A summary of each group’s topic area and impact goals is below. Detailed summaries, including the names of the faculty sponsors and student leaders, are available here.

A11Y AI is a space where people with disabilities at Stanford can have intentional conversations and develop strategic plans to ensure that emerging technologies, policies, and procedures around generative AI include the interests of people with disabilities. Specific sub-topics include advocating for fair disability representation in data, articulating research directions for advances in AI that are grounded in the experiences of people with disabilities, and exploring how people with disabilities can make an impact on the Stanford community and the AI community at large through careers and advocacy efforts in AI.

AI for Climate Resilience: Bridging Technology and Humanity focuses on harnessing AI to help communities become better prepared for and more resilient to climate change. This interdisciplinary team spans computer science, economics, ethics, design, and policy, and fosters collaboration and knowledge exchange around AI-enabled climate solutions grounded in equity, collaboration, and sustainable impact.

AI for Healthcare ideates novel ways to use AI to improve healthcare accessibility and equity while reducing costs and improving outcomes. The group structures its sessions around expert-led discussions, design sprints, and case studies.

Ambiguous Collaboration Working Group examines the challenge of understanding how humans should relate to machine collaborators in creative practices. They invite students working in and adjacent to music, theater, audio signal processing, computer graphics, virtual reality, generative models, and HCI to study how new forms of computation can shape their work. 

Augmenting Workforce Productivity using AI sees a problem of information overload in the workplace, so the group wants to explore how AI can be leveraged to empower employees, cut through the noise and busy work, and “maximize output per unit of human effort.”

Bridging the GAP investigates the human-centered governance of AI. Specifically, they engage with the stakeholders involved in AI governance and seek to explore parts of a governance toolkit (e.g., private and public regulations, funding, policies, laws, human rights doctrines, economic incentives, technical risk assessment measures, and enterprise software for governance). They aim to understand the technical challenges AI poses for governance, as well as compare and evaluate existing governance frameworks. 

Computational Journalism brings together diverse perspectives to explore how machine learning and artificial intelligence can be used responsibly for news production. Anyone impacted by the news is encouraged to participate to foster more discussions and perspectives beyond those directly involved in journalism.

Ethical and Effective Applications of AI in Education invites participants from computer science, education, law, psychology, and more to explore AI in education. They facilitate discussions with guest speakers who are grappling with current challenges in education and AI and can share case studies from their own experience. To enrich these discussions, group members study assigned readings and materials beforehand.

HAI BLaCK is based on a methodology centered on “Bias Limitation and Cultural Knowledge.” The group welcomes anyone seeking to foster a more inclusive and equitable technological landscape to join in critical dialogue, discourse, and discovery around the necessity of incorporating diverse cultural perspectives, experiences, norms, and knowledge into algorithm design, development, deployment, and analysis.

Social NLP focuses its discussions at the intersection of the social sciences and AI, with an emphasis on foundation models and NLP. Example topics include simulating human behaviors with foundation models, AI-driven persuasion, and information seeking in the foundation-model era.

The Future of Embodied AI dives into responsibly enhancing the human condition with AI-enabled hardware systems. The group hosts guest speakers from industry and government with backgrounds ranging from robotics research to law. Following each event, they explore critical questions, including the technical, ethical, legal, and moral dimensions of AI-enabled machines. They hope to publish an artifact of this research and recommend avenues for researchers and practitioners.

WellLabeled explores the ethical challenges of data annotation in AI development, particularly concerning toxic and harmful content. Discussions center on how to regulate annotators' exposure to distressing content, establish fair compensation mechanisms based on measured harm, and investigate validation methods through human-subject studies. Individuals interested in human-centered design, economics, and machine learning are especially encouraged to participate.

Learn more about the student affinity group program. 


Related News

Stanford Scholars Train Generative AI To Be Better Creative Collaborators
Nikki Goth Itoi | Mar 10, 2026
The team is building a shared “conceptual grounding” so that artists can steer models with precision.

What Your Phone Knows Could Help Scientists Understand Your Health
Katharine Miller | Mar 04, 2026
Stanford scientists have released an open-source platform that lets health researchers study the “screenome” – the digital traces of our daily lives – while protecting participants’ privacy.

How a HAI Seed Grant Helped Launch a Disease-Fighting AI Platform
Dylan Walsh | Mar 03, 2026
Stanford scientists in Senegal hunting for schistosomiasis—a parasitic disease infecting 200+ million people worldwide—used AI to transform local field work into satellite-powered disease mapping.