

Stanford HAI Selects 12 New Student Affinity Groups

November 20, 2023

This year, affinity group topics include accessibility for individuals with disabilities, artistic creation, education, healthcare, journalism, workforce productivity, and more. 

The Stanford Institute for Human-Centered AI (HAI) announced the selection of twelve new student affinity groups for the 2023-24 academic year. 

These groups support HAI’s mission to advance AI research, education, policy, and practice with a core focus on enhancing the human condition. The program brings together students from across Stanford schools who share an interest in a topic related to human-centered AI. HAI provides a space for students to share ideas, develop intellectually, and strengthen the community of future leaders dedicated to building AI that benefits all of humanity.

“The most exciting part of these affinity groups is seeing the creativity and ingenuity of the Stanford students,” says Vanessa Parli, HAI director of research programs. “They are all pursuing purposeful contributions towards human-centered AI, and we can’t wait to see the impact of our groups well into the future.”

A summary of each group’s topic area and impact goals is below. Detailed summaries, including the names of the faculty sponsors and student leaders, are available here.

A11Y AI is a space where people with disabilities at Stanford can have intentional conversations and develop strategic plans to ensure that emerging technologies, policies, and procedures around generative AI include the interests of people with disabilities. Specific sub-topics will consist of advocating for fair disability representation in data, articulating research directions for advances in AI that are grounded in the experiences of people with disabilities, and exploring how people with disabilities can make an impact on the Stanford community and AI community at large through careers/advocacy efforts in AI.

AI for Climate Resilience: Bridging Technology and Humanity focuses on harnessing AI to help communities become better prepared for and more resilient to climate change. The interdisciplinary team spans computer science, economics, ethics, design, and policy, and it fosters collaboration and knowledge exchange around AI-enabled climate solutions grounded in equity, collaboration, and sustainable impact.

AI for Healthcare ideates novel ways to use AI to improve healthcare accessibility and equity while reducing costs and improving outcomes. The group structures its sessions around expert-led discussions, design sprints, and case studies.

Ambiguous Collaboration Working Group examines the challenge of understanding how humans should relate to machine collaborators in creative practices. They invite students working in and adjacent to music, theater, audio signal processing, computer graphics, virtual reality, generative models, and HCI to study how new forms of computation can shape their work. 

Augmenting Workforce Productivity using AI sees a problem of information overload in the workplace, so the group wants to explore how AI can be leveraged to empower employees, cut through the noise and busy work, and “maximize output per unit of human effort.”

Bridging the GAP investigates the human-centered governance of AI. Specifically, they engage with the stakeholders involved in AI governance and seek to explore parts of a governance toolkit (e.g., private and public regulations, funding, policies, laws, human rights doctrines, economic incentives, technical risk assessment measures, and enterprise software for governance). They aim to understand the technical challenges AI poses for governance, as well as compare and evaluate existing governance frameworks. 

Computational Journalism brings together diverse perspectives to explore how machine learning and artificial intelligence can be used responsibly for news production. Anyone impacted by the news is encouraged to participate to foster more discussions and perspectives beyond those directly involved in journalism.

Ethical and Effective Applications of AI in Education invites participants from computer science, education, law, psychology, and more to explore AI in education. The group facilitates discussions with guest speakers who are grappling with current challenges in AI and education and who can share case studies from their own experience. To enrich these discussions, members study assigned readings and materials beforehand.

HAI BLaCK is based on a methodology centered on “Bias Limitation and Cultural Knowledge.” The group welcomes anyone seeking to foster a more inclusive and equitable technological landscape to join in critical dialogue, discourse, and discovery that illuminate the necessity of incorporating diverse cultural perspectives, experiences, norms, and knowledge into algorithm design, development, deployment, and analysis.

Social NLP focuses its discussions at the intersection of the social sciences and AI, with an emphasis on foundation models and NLP. Example topics include simulating human behaviors with foundation models, AI-driven persuasion, and information seeking in the foundation model era.

The Future of Embodied AI dives into responsibly enhancing the human condition with AI-enabled hardware systems. The group hosts guest speakers from industry and government with backgrounds ranging from robotics research to law. Following each event, they explore critical questions, including the technical, ethical, legal, and moral questions raised by AI-enabled machines. They hope to publish an artifact of their research and recommend avenues for researchers and practitioners.

WellLabeled explores the ethical challenges of data annotation in AI development, particularly concerning toxic and harmful content. Discussions center on how to regulate annotators' exposure to distressing content, establish fair compensation mechanisms based on measured harm, and investigate validation methods through human-subject studies. Individuals interested in human-centered design, economics, and machine learning are especially encouraged to participate.

Learn more about the student affinity group program. 
