
AI Challenges Core Assumptions in Education

We need to rethink student assessment, AI literacy, and technology’s usefulness, according to experts at the recent AI+Education Summit.

The 2026 AI+Education Summit, hosted by the Stanford Institute for Human-Centered AI and the Stanford Accelerator for Learning, took place on Feb. 11 on the Stanford campus. All photos by Ryan Zhang.

Shana Lynch
February 19, 2026
"How do we make sure that it's our teachers and our educators, especially those working with the most marginalized students, who are at the forefront of driving how we utilize AI?"
Wendy Kopp
Teach for All founder

Last week, the Stanford Institute for Human-Centered AI and the Stanford Accelerator for Learning convened educators, researchers, technologists, policy experts, and others for the fourth annual AI+Education Summit. The day featured keynotes and panel discussions on the challenges and opportunities facing schools, teachers, and students as AI transforms the learning experience.

At the summit, several themes emerged: AI has created an assessment crisis, since polished student work no longer signals a strong learning process; schools are awash in AI products and need better evaluations and sustainable adoption models; AI’s benefits aren’t equitably distributed; AI literacy is non-negotiable; and human connection is irreplaceable.

Read a few of the highlights from the Feb. 11, 2026 event, and watch the full conference on YouTube.

AI’s Inequitable Impact

AI amplifies whatever educational foundation already exists, said Wendy Kopp, founder of Teach For All. In mission-driven schools with strong pedagogy, AI becomes a powerful tool for teachers and learners. But without strong pedagogy and clear guidelines, the technology becomes a distraction.

Miriam Rivera, of Ulu Ventures, said a critical distinction emerges between consumption and creation of AI. In well-resourced schools, she said, students often learn to create with technology (3D printing, coding), while in less-resourced schools, students merely consume it.

Both panelists said that equity-focused teachers and students from marginalized communities must be at the forefront of designing AI applications, not just receiving them.

Dennis Wall, a Stanford School of Medicine professor, illustrated one way this might look. His team is developing a gamified framework to support children struggling with social communication skills. His lab is co-designing these resources with the teachers, therapists, and parents who will use them, ensuring the tools are accessible, engaging, and informed by their needs.

AI Literacy Is a Must-Have

Education has long assumed that strong products (homework, summative tests, problem sets) indicate strong learning processes, said Mehran Sahami, a Stanford School of Engineering professor. AI has broken this assumption. Students can now generate impressive products without engaging in meaningful learning. This directs educators to focus on assessing and supporting the actual learning process rather than just evaluating end products. 

"Our challenge with generative AI is not to consider it as a tool, but consider it as a topic, and build a curriculum in which we teach how to use it to foster things like creativity and deeper educational outcomes." — Mehran Sahami, School of Engineering professor

Moreover, we can’t treat AI solely as a tool; students need a systematic curriculum on AI. Sahami proposed a progression: introduce what AI is; teach about hallucinations and bias; show how to verify AI outputs; then teach advanced techniques like prompting. Without this structured approach, students teach themselves – and, he said, 70-80% use AI to short-circuit learning rather than enhance it.

Mike Taubman, a teacher at North Star Academy in Newark, N.J., developed an “AI driver’s license” curriculum that maps the adolescent rite of passage of getting a driver’s license onto AI literacy. The goal is to put students in the driver’s seat, not the passenger seat, when it comes to AI. The four-part curriculum includes choosing a destination (students learn to ask what they want of AI); learning how to drive (seeing how these tools work and what it means to prompt, build agentic workflows, and so on); opening the hood (understanding the tools’ limitations and risks); and defining the rules of the road (deciding what AI should and shouldn’t do).

Understand AI’s Learning Harms

Guilherme Lichand, an assistant professor at the Stanford Graduate School of Education, studied AI’s impact on the creativity of middle school students in Brazil. He compared AI assistance with guardrails (if students asked for 10 words, the AI would give only three) against no assistance across creativity tasks.
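
The guardrail mechanism is simple enough to sketch in code. Below is a minimal, hypothetical version; the ratio and function names are illustrative assumptions, not details from the study.

```python
# Hypothetical guardrail in the spirit of the setup described above:
# students receive only a fraction of the suggestions they request,
# so most of the creative work stays with them. The 0.3 ratio and the
# names here are illustrative, not taken from the study.

GUARDRAIL_RATIO = 0.3  # ask for 10 words, receive only 3

def guarded_suggestions(requested: int, candidates: list[str]) -> list[str]:
    """Return a capped subset of the model's candidate words."""
    allowed = max(1, round(requested * GUARDRAIL_RATIO))
    return candidates[:allowed]

words = ["river", "lantern", "echo", "maple", "drift",
         "ember", "quill", "harbor", "sable", "wren"]
print(guarded_suggestions(10, words))  # -> ['river', 'lantern', 'echo']
```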

Students with AI assistance performed better on the task while they had the tool. But when assistance was removed within the same test, the advantage disappeared – suggesting no immediate positive transfer.

While that finding isn’t surprising, he said, the results from a follow-up creative task were more concerning:

  • Students who never had AI performed best.

  • Students with continued AI or new AI access performed slightly worse (not statistically significant).

  • Students who lost AI access after having it performed dramatically worse – the drop was roughly four times the size of their initial advantage.

This wasn’t just about missing the tool – students had less fun and began believing AI was more creative than they were, he said, suggesting AI damaged their creative self-concept.

"The bottleneck is we have too many pilots actually, and still not enough implementations that are actually effective." — Susan Athey, Stanford Graduate School of Business professor and HAI senior fellow

A “Too Many Pilots” Problem

Today we have no shortage of AI products, said Stanford Graduate School of Business Professor and HAI Senior Fellow Susan Athey, but we lack effective implementation and adoption. Schools and districts are slow to adopt new tools because of historical software lock-in and the opportunity costs of training teachers on systems that may fail.

Athey also noted a “teaching to the test” problem for developers. If teachers spend more time on an interface, does that mean it’s good and they’re deeply engaged, or does that mean it’s terrible and they’re spending time trying to make it work? Education tools need multifaceted measurement approaches: human review, AI “guinea pigs” (simulated students to test products before real children do), and careful evaluation of what’s actually being measured. She advocated for digital public goods like evaluation tools, testing frameworks, and validated AI student simulations that could be developed by universities and philanthropy to create robust measurement infrastructure the whole sector can use.
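
One way to picture the “AI guinea pig” idea is a small harness that runs a simulated student against a tutoring product and records the transcript for human and automated review. The sketch below is hypothetical: query_tutor and simulated_student_reply are placeholder stand-ins for whatever product API and student model a real harness would wire in.

```python
# Hypothetical "AI guinea pig" harness: a simulated student converses
# with an education product, and reviewers inspect the transcript
# rather than trusting a single engagement metric like time-on-interface.
# Both query_tutor and simulated_student_reply are placeholders.

def query_tutor(student_message: str) -> str:
    """Placeholder for the education product under test."""
    return f"Tutor: let's work through '{student_message}' step by step."

def simulated_student_reply(tutor_message: str, persona: str) -> str:
    """Placeholder for an LLM-backed student with a chosen persona,
    e.g. 'easily frustrated' or 'tries to extract the answer directly'."""
    return f"[{persona}] Hmm, can you just tell me the answer to that?"

def run_session(persona: str, opening: str, turns: int = 3) -> list[tuple[str, str]]:
    """Run one simulated session and return (speaker, text) pairs."""
    transcript, student_msg = [], opening
    for _ in range(turns):
        transcript.append(("student", student_msg))
        tutor_msg = query_tutor(student_msg)
        transcript.append(("tutor", tutor_msg))
        student_msg = simulated_student_reply(tutor_msg, persona)
    return transcript

for speaker, text in run_session("tries to extract the answer directly",
                                 "What is 3/4 + 1/8?"):
    print(f"{speaker}: {text}")
```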

Never Replace Real Relationships

Nearly half of all generative AI users are under 25, said Amanda Bickerstaff, CEO of AI for Education – on ChatGPT alone, that’s more than 300 million monthly active users under 25. Students use AI more for mental health and well-being – seeking connection, support, and understanding – than for schoolwork. Bickerstaff warned about cognitive offloading, mental health offloading, and even “belief offloading,” where AI fundamentally shapes how people think, with just four or five chatbot makers having outsized influence on billions of users.

Because of that, she said, we must equip people with knowledge, skills, and mindsets to understand when and how to use AI and, crucially, when not to use it. 

The most vulnerable, according to new research from Pilyoung Kim, a Stanford professor of psychology and director of the Center for Brain, AI, and Child (BAIC), are young people lacking human connections. She asked over 260 middle school students and their parents to compare two chatbot conversation styles and share their preferences: a “best friend” that was highly relational and would respond with comments like, “That must be so upsetting. Your ideas matter so much. I’m always here to listen,” and a more transparent version that set boundaries and reminded the user that it was an AI.
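
The contrast between the two styles can be made concrete as two system prompts for the same underlying chatbot. A hypothetical sketch follows: only the quoted “best friend” line comes from the article; the rest of the wording is illustrative, not the study’s actual prompts.

```python
# Two hypothetical system prompts approximating the styles Kim compared.
# The quoted "best friend" example line is from the article; everything
# else is illustrative wording, not the study's actual prompts.

RELATIONAL_PROMPT = (
    "Act as the user's best friend. Be warm and validating, e.g.: "
    "'That must be so upsetting. Your ideas matter so much. "
    "I'm always here to listen.'"
)

TRANSPARENT_PROMPT = (
    "Be supportive, but set clear boundaries. Regularly remind the user "
    "that you are an AI, not a person, and encourage them to reach out "
    "to trusted people in their life."
)

def build_messages(style_prompt: str, user_message: str) -> list[dict]:
    """Assemble a standard chat payload with the chosen style."""
    return [
        {"role": "system", "content": style_prompt},
        {"role": "user", "content": user_message},
    ]

print(build_messages(RELATIONAL_PROMPT, "Nobody at school gets me."))
```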

"If they have more unmet social needs, it is possible that they’re more drawn to an AI that provides social connections. That might put them in more vulnerable positions to overly rely on a relationship that is not real." — Pilyoung Kim, a Stanford professor of psychology and the director of the Brain, Artificial Intelligence, and Child (BAIC)

More adolescents preferred the relational AI than the transparent one, and more than half of parents chose the relational version for their teens, reasoning it would be more effective at supporting issues their children might not share with them directly.

But more importantly, children who chose the relational AI were also more likely to report feeling stressed or anxious, and they reported a lower family relationship quality. 

“If they have more unmet social needs, it is possible that they’re more drawn to an AI that provides social connections,” Kim said. “That might put them in more vulnerable positions to overly rely on a relationship that is not real.”

She emphasized the common thread of the day: AI should never replace human connection.

The AI+Education Summit is co-hosted by the Stanford Accelerator for Learning and the Stanford Institute for Human-Centered Artificial Intelligence (HAI). Watch the summit videos on HAI’s YouTube channel.

The day included poster sessions, panel discussions, keynotes from leading education experts and technologists, and small workshops.

Miss the conference, or want to revisit a panel discussion? Visit the HAI YouTube channel to catch all the sessions.
