The Stanford Institute for Human-Centered AI and the Stanford Accelerator for Learning have awarded 16 teams a total of $625,000 in seed grants to explore innovative uses of generative AI in education.

The 10 faculty projects and six student/staff projects receiving the new “Generative AI for the Future of Learning” grants will develop AI tools both to improve teaching and learning and to better understand critical issues in the learning context. Funded projects include interactive learning simulations and chatbots that help teachers give students feedback or enable self-paced learning in underserved communities.

“Most generative AI, which creates novel content using text, audio files, and images, is not built for educational purposes,” said Vanessa Parli, Stanford HAI director of research. “We have the opportunity to help scholars research and shape applications of this emerging technology in an ethical, equitable, and safe manner, grounded in pedagogy.”

“The level of interest from the Stanford community around the topic of generative AI and the future of learning has been tremendous,” said Victor Lee, associate professor in the Stanford Graduate School of Education and faculty lead for AI+Education. “We have been hearing loudly from faculty, staff, and students that they want to take an active role in shaping how these new technologies can be powerful for learners of all backgrounds and ages in ways that are socially responsible and forward-thinking.”

“A goal of the seed grant program is to nurture the development of a community of Stanford scholars who aim to both understand and build the future of learning with generative AI technologies,” said Cathy Chase, director of research for AI+Education. To support this effort, the Stanford Accelerator for Learning will hold regular workshops and meetings where seed grantees can get feedback on their developing work and learn about generative AI technologies or instructional design.

Research teams include scholars from the Graduate School of Education, School of Engineering, School of Humanities and Sciences, School of Medicine, and Graduate School of Business, reflecting the importance of cross-disciplinary efforts to understand this new technology. 

Stanford HAI contributed $225,000 and the Accelerator contributed $365,000 to the 10 faculty projects, which all underwent a rigorous Ethics and Society Review as part of the Stanford HAI grant-making process. The Accelerator fully funded the six staff and student projects, at a total of $35,000.

Awardees (Faculty):

Authoring Interactive Simulations with Generative AI for Culturally Sustaining Pedagogy

Research team: Hari Subramonyam, Nick Haber, Shima Salehi, Maneesh Agrawala, Roy Pea

Educational simulations such as PhET can increase learner engagement and promote generative learning. Yet it is nearly impossible for educators to create their own interactive simulations to support their learners. These scholars will develop the Simulations Adaptive Learning Tool (SALT), which leverages generative AI to let educators create or adapt interactive content to meet learners’ needs. Using SALT, educators can personalize simulations in ways that nurture students' cultural and linguistic diversity, enhancing the effectiveness of their learning experience.

Generating Descriptions of Data Visualizations to Improve Accessibility and Learning Outcomes in STEM Education

Research team: Christopher Potts, Judith Fan, Elisa Kreiss

Data visualizations are indispensable for communicating patterns in quantitative data and are crucial in STEM learning contexts. Unfortunately, these visualizations are only rarely accompanied by high-quality descriptions that would make them more accessible to blind and low-vision learners. These scholars will use research in AI, cognition, and education to make complex data visualizations more accessible through development of datasets containing high-quality descriptions of many kinds of data visualizations; training of AI systems that generate descriptions for novel data visualizations; and measurement of the impact of human and model-generated descriptions on learner comprehension.

Teach M-Powered: A Tool for Teachers to Support Students’ Learning Mindset Through Written Feedback

Research team: Dora Demszky, Mei Tan, Rose Wang 

Providing timely, personalized, and mindset-supportive feedback to students is an integral part of high-quality instruction, yet it is a nontrivial and extremely time-intensive task. These scholars will develop Teach M-Powered, a generative AI-powered tool that assists teachers with writing effective feedback to students. 

Humanizing AI for Better Collaborative Learning

Research team: John Mitchell, Jennifer Langer-Osuna, Glenn Fajardo

This project aims to explore various approaches to AI-assisted collaborative learning and to develop and evaluate sample uses over the coming year. The team will build an action research community in which students collaborate on toolkits that will be offered to Stanford project partners for their teaching and learning environments next academic year, while examining GenAI ethical principles and AI challenges for education.

College Writing with the BlackRhetorics Corpus for Generative Models

Research team: Adam Banks, Harriet Jernigan, Tolulope Ogunremi, Onyothi Nekoto

Since ChatGPT was released in November 2022 and many other models followed, researchers have studied these models’ inability to generate African American English (AAE) in conversation with Black student communities. This deficit arises from the corpora on which commercial generative models are trained. These scholars will build on the TwitterAAE and CORAAL corpora with their own data set, BlackRhetorics, and use NLP transfer learning and dialect techniques to improve the tools for Black student research. This Black research team will demonstrate how generative models can be deployed for inclusive Black language pedagogies.

Unlocking Precision Medicine: Innovative Training & AI Chatbot for Self-Paced Learning in Underserved Communities

Research team: Michael Snyder, Anshul Kundaje, Amir Bahmani

This project addresses the challenge of limited access to quality education and software in the rapidly growing field of biomedical data science, which generates vast amounts of data requiring advanced computational skills to process. The team proposes expanding the Stanford Data Ocean platform with AI chatbots like ChatGPT to support learning of interdisciplinary concepts in precision medicine. Their integrated curricula will be customized to address major barriers to quality education for underserved communities.

Detecting AI-Generated Text in the Classroom

Research team: Chelsea Finn, Christopher Manning, Eric Mitchell 

Large language models like ChatGPT are tempting tools for students to use to complete various forms of assessments, from rhetorical writing to programming. Inspired by this problem, this team recently released DetectGPT, which uses an LLM to automatically detect its own outputs. While DetectGPT and related systems recently developed by OpenAI and Turnitin are promising steps toward automated detection of machine-generated text, standardized measurements of detector quality are missing, making comparison of detectors impossible and leaving educators in the dark about whether a detector is trustworthy. The research team proposes a new benchmark for machine-generated text detectors, addressing blind spots in existing evaluations. They will use this evaluation suite to develop the next generation of detection algorithms.

Evaluating ChatGPT’s Capability in Supporting and Augmenting Real-World Problem Solving

Research team: Carl Wieman, Shima Salehi, Nick Haber, Karen Wang

This project aims to examine the potential of generative AI models in facilitating authentic problem solving in science and engineering domains, and to determine the extent to which college students can learn to leverage AI to enhance their problem-solving practices and outcomes. The research team will also explore how science and engineering experts use ChatGPT to augment their problem solving, which will lead to a framework of AI-human collaborative problem-solving practices. The research will have important implications for STEM education and how to prepare students for a future of human-AI collaboration.

MAI-TA: A Medical AI Teaching Assistant Using Conversational GPT-3 and Virtual Reality for Remote Medical Education

Research team: Sakti Srivastava, Ken Salisbury, Joel Sadler, Christoph Leuze, Samrawit Gebregziabher

This project aims to use conversational AI and virtual reality to create interactive 3D avatars of medical virtual teaching assistants that can simulate real-world medical training in virtual environments. This team proposes MAI-TA, a medical conversational virtual agent that can supplement in-person teaching with personalized exploratory learning. Leveraging prior work on educational VR with anatomy photogrammetry scans, they will integrate OpenAI’s GPT-3 to afford students a conversational way to explore digital anatomical specimens with customized guidance in a virtual lab setting. This project builds on previous research demonstrating that VR and digital anatomy labs can broaden access to medical training for underrepresented and underresourced learners.

Novel Pedagogy and Assessment Using Generative Models

Research team: Russell Berman, Ruth Starkman

This project uses generative models to engage students in the invaluable process of critical thinking and writing. The research team proposes deploying ChatGPT in their Stanford course ESF 17/17A What Can You Do for Your Country?, which asks students to read historically important texts about public service, from John F. Kennedy’s speech and Thucydides’ “Pericles’ Funeral Oration” to Lincoln’s “Gettysburg Address,” Frederick Douglass’ “What to the Slave Is the Fourth of July,” and many others. Thus far, they have seen that generative models can help students better learn and articulate their ideas about public service. The team expects that building new approaches to pedagogy and assessment that incorporate generative models will add value.

Awardees (Students, Staff, and Postdocs):

“Can You Hear Me?”: How Native Users of Non-Dominant Sociolects Adapt, Contort, and Remix LLMs

Research team: Laura Hill-Bonnet, Parth Sarin

This team will investigate how native users of non-dominant sociolects engage with large language models in ways that replicate and contest dominant linguistic hierarchies. They will build theories about language and learning in a society with generative AI, exploring emerging concerns about the effects of LLMs, such as fears that AI will homogenize, formalize, and unduly arbitrate language.

Neurodiversity and Generative AI: Enhancing Creative Self-Expression in Students with Disabilities in Bangladesh

Research team: Labib Tazwar Rahman

This project will explore the use of generative AI in enhancing the learning experience of neurodiverse students, especially those from low-income backgrounds who have limited literacy skills and/or are non-verbal. The project will incorporate generative AI into the curriculum of “Joy of Computing” – a computer training program in Dhaka, Bangladesh. Students will be able to generate creative content that they will incorporate into their personalized coursework and homework.

Documenting, Co-Designing, and Publishing Teachers’ Strategies for Teaching Writing with ChatGPT

Research team: Chris Mah

AI applications like ChatGPT are often framed as threats to writing teachers; however, they also have tremendous potential to help teachers perform their jobs more effectively. This project aims to document strategies writing teachers are already using to leverage ChatGPT, co-design new strategies with teachers, and create a publicly accessible framework of strategies that teachers can use as a reference. 

Helping K-12 Teachers Plan Better Lessons in Less Time

Research team: Rizwaan Malik, Claire Chen, Zaeem Bhanji, Stephanie Seidmon, Sonya Kotov, Manasi Sharma

Teachers spend an average of three hours per day planning lessons – time they could be spending directly supporting students or taking care of themselves. Scholars will build a new AI tool to drastically reduce the time teachers spend creating lesson materials. By leveraging AI, teachers will be able to rapidly edit and refine lesson materials with natural language prompts. The system will also suggest ways to incorporate evidence-informed pedagogical strategies into the lesson materials. In doing so, the tool will help teachers prepare better lesson materials in less time.

Museum in the Classroom: Enhancing Learning Engagement and Comprehension of School Topics Through an AR-Based Educational App

Research team: Carina Ly, Alan Cheng, Andrea Cuadra 

In today’s digital-oriented world, the traditional classroom setup often fails to capture students’ attention and stimulate their curiosity. To address this, this project will develop an educational app that leverages augmented reality technology to increase classroom engagement. Educators list the topics they are currently teaching, and the app uses AR and artificial intelligence to display an AR museum filled with AI-generated artifacts that relate to those topics and can be represented through common classroom items. The app will guide teachers to lightly rearrange the classroom to emulate a museum. Students then use an iPad to scan the classroom items, which will be associated with the artifacts. Students will receive information about the artifacts, as well as trivia-like questions to test their understanding.

Developing Novice Programmers’ Capacity for Critical Reflection on Generative AI

Research team: Benjamin Xie

This research explores how novice programmers can critically reflect on generative AI in order to use GenAI tools effectively to augment their programming processes. GenAI tools such as GitHub Copilot can help experienced programmers write, interpret, and test code. These tools may also be able to help the growing number of people interested in learning to code. However, GenAI tools have opaque limitations and biases that can lead to confusion, frustration, or diminished self-efficacy among learners, especially learners from minoritized groups. This project seeks to understand how novice programmers can use Copilot to support writing, interpreting, and evaluating code. This research will take a critical stance toward GenAI, situating it as a problematic yet powerful tool that requires constant critical reflection to determine appropriate usage.

Learn more about Stanford HAI research opportunities.
