
Student Affinity Groups

Are you a Stanford student interested in creating meaningful interdisciplinary connections within the Stanford community? Do you have ideas on advancing AI to improve the human condition?

If so, you’re invited to apply to have HAI sponsor your affinity group. HAI Student Affinity Groups are small interdisciplinary teams of Stanford undergraduates, graduate students, and postdocs who share an interest in a topic related to the development or study of human-centered AI. Affinity Groups provide a space for students to share ideas, develop intellectually, and strengthen the community of future leaders dedicated to building AI that benefits all of humanity.

The application period has ended; check back for the announcement of the winning teams.


Steps to Get Started

  1. Identify a topic of focus and gather an interdisciplinary group of students who share interest in that topic. If you have a topic idea but are looking for others to join your group, fill out this Interest Form. Responses can be found here.
  2. Identify two students from different disciplines to serve as the leads.
  3. Identify a faculty mentor; no formal time commitment is required of faculty. If you need support in reaching out to faculty, please contact HAI Research Associate Christine Raval.
  4. Submit the application form detailing the goals for your group and steps you’ll take to achieve those objectives.



Benefits of Joining HAI Student Affinity Groups

  • Funding of up to $1,000 for the academic year to spend on small quarterly or biweekly gatherings, such as: workshopping lunches, student-hosted speakers, book discussions or discussion series.
  • Space (physical and intellectual) to share knowledge across disciplines and create collaborations.
  • Access to a community of researchers, faculty, and staff committed to promoting human-centered uses of AI, and ensuring that humanity benefits from the technology and that the benefits are broadly shared.
  • Inside scoop on HAI events, research, publications, and volunteer opportunities.

Guidelines

  • Applications due August 25, 2023.
  • Student Affinity Groups will be announced in late September and will run during the fall, winter, and spring quarters.
  • Each group must have students from two or more disciplines.
  • Each group must have one faculty mentor; no formal time commitment is required of faculty.
  • At the end of the spring quarter, groups must submit a report summarizing their outcomes. Members who elect to continue must reapply.

FAQs

  • How can funds be used? Possible expenses include food, marketing materials, and other materials needed for the program (books, software, printing, etc.).
  • Can people join mid-program? Yes, new members can join at any point during the academic year.
  • Is more funding available if larger project ideas are developed through the affinity groups? Promising research ideas that develop through the affinity group could make a great proposal for the HAI seed grant program.

2022-23 Academic Year

  • Our affinity group focuses on applying AI to climate and environmental problems. Climate change is one of the biggest challenges of the 21st century, and it is a complex issue that requires diverse perspectives. Discussions will cover the science behind greenhouse gases, the disastrous effects of climate change (wildfires, flooding, etc.), humanity’s role in mitigating the problem, and human-centered AI developments that can address climate-related issues. Discussion topics will be moderated by affinity group leaders Wai Tong Chung, a PhD student and HAI Grad Fellow, and Greyson Assa, a Master's student at the Doerr School of Sustainability.

    Name | Role | School | Department
    Wai Tong Chung | Graduate Student Co-Lead | School of Engineering | Mechanical Engineering
    Greyson Assa | Graduate Student Co-Lead | School of Sustainability | Sustainability Science and Practice
    David Wu | Graduate Student | School of Engineering | Aeronautics and Astronautics
    James Hansen | Graduate Student | School of Engineering | Aeronautics and Astronautics
    Khaled Younes | Graduate Student | School of Engineering | Mechanical Engineering
    Seth Liyanage | Graduate Student | School of Engineering | Mechanical Engineering
    Allison Cong | Graduate Student | School of Engineering | Mechanical Engineering
    Matthias Ihme | Faculty Sponsor | School of Engineering | Mechanical Engineering/SLAC

  • We are interested in the conditions under which human-AI collaboration leads to better decision-making. Algorithms are increasingly used in high-stakes settings such as medical diagnosis and refugee resettlement. However, algorithmic recommendations in human-AI collaboration can have perverse effects. For example, doctors may put in less effort when algorithmic recommendations are readily available. Similarly, the introduction of algorithmic recommendations can create moral hazard, leading to worse decision-making. Our affinity group would like to explore the conditions and incentives affecting human-AI collaboration, integrating theories from political science, communication, and HCI.

    Name | Role | School | Department
    Eddie Yang | Graduate Student Co-Lead | School of Humanities & Sciences | Center on Democracy, Development and the Rule of Law
    Yingdan Lu | Graduate Student Co-Lead | School of Humanities & Sciences | Communication
    Matt DeButts | Graduate Student | School of Humanities & Sciences | Communication
    Yiqin Fu | Graduate Student | School of Humanities & Sciences | Political Science
    Yiqing Xu | Faculty Sponsor | School of Humanities & Sciences | Political Science

  • Recent developments in foundation models like Stable Diffusion and GPT-3 have enabled AI to create in ways that were previously possible only for humans—marking an evolution of AI from a problem-solving machine to a generative machine. Simultaneously, these models are moving from research into industry. The productization of AI for creative purposes (writing, image generation, etc.) is just beginning to emerge, but it will accelerate in the coming years, affecting the media industry and creatives of all kinds (filmmakers, photographers, writers, professional artists, etc.).

    While hype around these new tools for creativity is exploding in the media, we have yet to find a student community at Stanford dedicated to exploring the future of creative generative AI. We are interested in understanding the technical capabilities of generative AI models, current product innovations in industry, the impact of generative AI on the future of art creation, and the social and cultural implications of new creative tools. As our team comes from a range of backgrounds (Computer Science, Symbolic Systems, Political Science, and English), our breadth of expertise will enable us to engage in cross-disciplinary conversations.

    Name | Role | School | Department
    Isabelle Levent | Undergraduate Student Co-Lead | School of Humanities & Sciences | Symbolic Systems
    Lila Shroff | Undergraduate Student Co-Lead | School of Humanities & Sciences | English
    Regina Ta | Graduate Student | School of Humanities & Sciences | Symbolic Systems
    Millie Lin | Graduate Student | School of Engineering | Computer Science
    Sandra Luksic | Research Assistant | School of Humanities & Sciences | Ethics in Society
    Mina Lee | Graduate Student | School of Engineering | Computer Science
    Michelle Bao | Graduate Student | School of Engineering | Computer Science
    Rob Reich | Faculty Sponsor | School of Humanities & Sciences | Political Science

  • HAI graduate fellows are planning to host a panel with three AI experts from academia, government, and industry, moderated by a comedian, in an effort to lower the barrier to entry into the AI conversation. This event will support HAI’s effort to raise awareness and inform the general public about AI’s limitations and how AI can empower human capabilities. Our goals are to solidify the HAI graduate fellow community, connect HAI graduate fellows with the general public and the Stanford community, start a fun and entertaining conversation about AI’s limitations, and engage with AI experts in academia, government, and industry in an informal setting.

    Name | Role | School | Department
    Alberto Tono | Graduate Student Co-Lead | School of Engineering | Civil and Environmental Engineering
    Martino Banchio | Graduate Student Co-Lead | Graduate School of Business | Graduate School of Business
    Yingdan Lu | Graduate Student | School of Humanities & Sciences | Communication
    Surin Ahn | Graduate Student | School of Engineering | Electrical Engineering
    Betty Xiong | Graduate Student | School of Medicine | Biomedical Informatics
    Martin Fischer | Faculty Sponsor | School of Engineering | Civil and Environmental Engineering

  • Our group will develop tools to improve the next generation of foundation models, conducting research at multiple stages of foundation model development. Because most foundation model research to date has centered on text or image data, we will focus on training foundation models and large language models on newer modalities, such as structural biology and joint image-text pairings, and on evaluating methods with downstream fine-tuned task performance in the loop via meta-learning.

    Name | Role | School | Department
    Rohan Koodli | Graduate Student Co-Lead | School of Medicine | Biomedical Data Science
    Gautam Mittal | Graduate Student Co-Lead | School of Engineering | Computer Science
    Rajan Vivek | Graduate Student | School of Engineering | Computer Science
    Douwe Kiela | Faculty Sponsor | School of Humanities & Sciences | Symbolic Systems

  • We propose an affinity group dedicated to advancing theoretical understanding of human interaction and trust with AI-based systems and technologies. For AI to augment human intelligence while keeping humans in charge, we must understand how humans interact with AI technologies and build trust in such systems. Understanding trust in human-AI interaction is critical to developing AI systems that are ethical, safe, authentic, and trustworthy. Despite the importance of this relationship, the literature has devoted little attention to advancing theoretical and practical knowledge of human-computer interaction with AI systems. This gap calls for a multidisciplinary approach that leverages knowledge from cognitive psychology, computer science, and user design. We believe the theoretical framework developed through our group’s meetings and discussions will help researchers and practitioners across disciplines apply AI systems to real-world problems.

    Name | Role | School | Department
    Alaa Youssef | Postdoc Co-Lead | School of Medicine | Radiology
    Xi Jia Zhou | Graduate Student Co-Lead | Graduate School of Education | Graduate School of Education
    Michael Bernstein | Faculty Sponsor | School of Engineering | Computer Science

Contact Us

For more information, contact HAI Research Associate Christine Raval.