
From Privacy to ‘Glass Box’ AI, Stanford Students Are Targeting Real-World Problems

Date: February 27, 2026
Topics: Generative AI | Healthcare | Privacy, Safety, Security | Computer Vision | Sciences (Social, Health, Biological, Physical)

An Amazon-backed fellowship will support 10 Stanford PhD students whose work explores everything from how we communicate to understanding disease and protecting our data.

Noah Cowan wants to help paralyzed people speak through computers with less friction. Valeh Valiollah Pour Amiri is building what she calls a “glass box” AI model that could one day simulate an entire virtual cell. Ken Ziyu Liu is on a mission to protect people from being profiled by the very AI tools they rely on.

These three students are among 10 Stanford PhD candidates who will receive two years of funding and AWS credits through Amazon’s AI PhD Fellowship program, which supports emerging researchers tackling some of AI’s most challenging problems. Their work spans fields from aeronautics and astronautics to deep learning, natural language processing, genetics, and statistics.

The AI PhD Fellowship program aims to drive innovation in practical, useful AI. Amazon is supporting more than 100 scholars across nine universities, offering compute resources, mentorship, and in-person meetings with other scholars.

“This program will help Stanford prepare the next generation of scientific leaders to solve real-world problems,” said David Studdert, Stanford vice provost and dean of research. “It will dramatically accelerate their ability to translate bold ideas into meaningful impact.”

Meet three of the fellows exploring how AI and data science can better serve the world:

The Open Anonymity Project

In the Stanford AI Lab (SAIL), computer science PhD student Ken Ziyu Liu studies the intersection of foundation models, data, and user privacy. He’s on a mission to create better privacy tools that protect humans in an AI-driven world. “I tell my peers that OpenAI probably knows more about you than your parents,” he says. “We want to use these tools, but you can be targeted and profiled very accurately.”

With the Open Anonymity Project, Liu and SAIL’s visiting researcher, Erik Chi, are building a privacy layer for ChatGPT that’s similar to a virtual private network (VPN). This layer removes a user’s identity from the LLM request, making the transaction appear anonymous to the model. One benefit of their approach to privacy is that users would have the only copy of their personal AI activity histories, enabling a digital memory profile controlled by the user, not the AI provider. “Imagine walking around with your own data on your phone, and you decide what information to share with businesses, doctors, and other third parties each time you interact with an AI-powered online service,” Liu explains. 
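The article gives no implementation details, but the general pattern Liu describes (strip identifying fields before a request leaves the device, and keep the conversation record only in local storage) can be sketched in a few lines. Everything below, including the strip_identity and send_anonymous functions, the request fields, and the local history file, is a hypothetical illustration of that pattern, not the Open Anonymity Project's actual code:

```python
import json
import uuid
from pathlib import Path

# Hypothetical sketch of an identity-stripping layer between a user and an LLM API.
# Function names, request fields, and the history file are illustrative only.

HISTORY_PATH = Path("local_chat_history.jsonl")  # the full record stays on-device

def strip_identity(request: dict) -> dict:
    """Drop account identifiers and attach a one-time, unlinkable session token."""
    cleaned = {k: v for k, v in request.items()
               if k not in {"user_id", "email", "api_account", "device_id"}}
    cleaned["session_token"] = uuid.uuid4().hex  # fresh token per session
    return cleaned

def call_llm(anonymous_request: dict) -> str:
    """Stand-in for the network call to the model provider."""
    return f"(model reply to: {anonymous_request['prompt']!r})"

def send_anonymous(prompt: str, user_profile: dict) -> str:
    request = {"prompt": prompt, **user_profile}
    reply = call_llm(strip_identity(request))
    # Only the user's own device keeps the conversation history.
    with HISTORY_PATH.open("a") as f:
        f.write(json.dumps({"prompt": prompt, "reply": reply}) + "\n")
    return reply

if __name__ == "__main__":
    profile = {"user_id": "u-12345", "email": "me@example.com"}
    print(send_anonymous("Summarize my lab notes.", profile))
```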

Liu says that by adopting security technologies already common across the software industry – such as the blind signatures behind Apple iCloud Private Relay – the user’s identity is detached from each request and cannot be traced by the model. The user’s chat history is stored locally on their own device, and the model sees only an anonymous interaction for each chat session.
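For readers unfamiliar with blind signatures, the core trick can be shown with textbook RSA and deliberately tiny, insecure numbers. This is a generic classroom sketch of the primitive the passage names, not the project's implementation or Apple's protocol:

```python
from math import gcd
import hashlib
import secrets

# Toy RSA blind-signature sketch (textbook-sized numbers, NOT secure; illustration only).
# The point: an issuer can sign an access token without ever seeing it, so later use
# of the token cannot be linked back to the account that requested it.

# Issuer key pair built from tiny textbook primes.
p, q = 61, 53
n, e = p * q, 17
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                      # issuer's private exponent

def digest(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# 1. Client blinds the token with a random factor r before sending it out.
token = b"anonymous-access-token"
m = digest(token)
r = secrets.randbelow(n - 2) + 2
while gcd(r, n) != 1:
    r = secrets.randbelow(n - 2) + 2
blinded = (m * pow(r, e, n)) % n

# 2. Issuer signs the blinded value; it learns nothing about `token`.
blind_sig = pow(blinded, d, n)

# 3. Client removes the blinding factor to recover a valid signature on `token`.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Anyone holding the public key (n, e) can verify the unblinded signature.
assert pow(sig, e, n) == m
print("Signature verifies; the issuer never saw the unblinded token.")
```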

A Mechanistic AI Virtual Cell Model

Genetics PhD student Valeh Valiollah Pour Amiri is developing AI models based on DNA sequences that could help scientists understand how living cells function at the molecular level.

In computational biology, scholars aim to understand biology using computer science. Amiri’s focus is interpretability, or understanding what the sequence model is learning. “We want to pioneer a glass box approach, instead of a black box, so we can take the genetic story all the way to the long-term goal of building a comprehensive virtual cell,” she says.

In the early stages of her work, Amiri is exploring different interpretability tools to help her build specialized models that can explain their own outputs. Each model is designed to capture the essence of a specific stage of genetic regulation – for example, transcription factor binding (where proteins bind DNA to regulate gene expression), chromatin accessibility (the degree to which DNA is open or wrapped around histone proteins), and epigenetic modifications (chemical modifications to DNA or histones that affect expression levels without changing the DNA sequence).

Next, these individual models can be assembled into a hierarchy of connected modules that interact with each other in ways that mirror biology. Ultimately, Amiri envisions this network of models as a mechanistic AI virtual cell that simulates how and why a cell behaves as it does. Among other applications, this innovation could accelerate new treatments for disease by making it possible to test hypotheses with AI before moving to wet labs and clinical trials. The interpretable, modular nature of such a model could help pinpoint the exact molecular processes affected by various genetic modifications.
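As rough intuition for what a modular, self-explaining ("glass box") pipeline might look like, the toy sketch below chains three stand-in stages, each returning a prediction together with the reason for it. The module names and scoring rules are invented for illustration; they are not Amiri's models:

```python
from dataclasses import dataclass

# Hypothetical sketch of a "glass box" pipeline: each module models one stage of
# gene regulation and returns both a prediction and a human-readable explanation.

@dataclass
class StageOutput:
    value: float          # quantitative prediction for this regulatory stage
    explanation: str      # why the module produced that value

def tf_binding(sequence: str) -> StageOutput:
    """Toy stand-in for a transcription-factor-binding model (simple motif counting)."""
    motif = "TATA"
    hits = sequence.count(motif)
    return StageOutput(min(1.0, 0.3 * hits),
                       f"found {hits} '{motif}' motif(s) driving predicted binding")

def chromatin_accessibility(binding: StageOutput) -> StageOutput:
    """Toy stand-in for a chromatin-accessibility model conditioned on binding."""
    value = 0.5 + 0.4 * binding.value
    return StageOutput(value, f"accessibility raised by binding score {binding.value:.2f}")

def expression(accessibility: StageOutput) -> StageOutput:
    """Toy stand-in for an expression-level readout."""
    value = accessibility.value ** 2
    return StageOutput(value, f"expression follows accessibility {accessibility.value:.2f}")

def run_virtual_cell(sequence: str) -> list:
    """Chain the stage modules so every step's reasoning stays inspectable."""
    b = tf_binding(sequence)
    a = chromatin_accessibility(b)
    x = expression(a)
    return [b, a, x]

if __name__ == "__main__":
    for stage in run_virtual_cell("GGTATACCTATAAGG"):
        print(f"{stage.value:.2f}  <- {stage.explanation}")
```

Because each module reports its own reasoning, a change in the final expression estimate can be traced back to the specific regulatory stage that caused it, which is the kind of traceability the "glass box" framing is after.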

AI for Neuroscience

Noah Cowan is applying his background in machine learning and computational neuroscience to improve brain-computer interfaces (BCIs) that could give people who are paralyzed and unable to speak a better way of communicating with loved ones. 

A third-year statistics PhD student in the Linderman Lab – part of the Statistics Department and the Wu Tsai Neurosciences Institute at Stanford – Cowan specializes in niche mathematical approaches that help models cope with shifts in data distribution.

“Modern brain-computer interfaces rely on neural networks to translate human thoughts into text. But brain signals change rapidly, making the models less accurate over time,” Cowan says. “We’re building on prior solutions to this problem with a statistical approach that considers multiple likely possibilities for what the user intended to say. Ultimately, we hope users of BCI technology will experience less friction and a smoother communication experience.”
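To make the two ideas in that quote concrete (keeping several candidate interpretations alive at once, and adapting to slow drift in the signal statistics), here is a toy simulation with a one-dimensional, made-up neural feature. It illustrates the general statistical pattern only and is not the Linderman Lab's method:

```python
import numpy as np

# Toy illustration: (1) maintain a posterior over several candidate words instead of
# committing to one, and (2) track slow drift in the neural feature statistics.
# The feature model, drift model, and candidate set are all invented for this sketch.

rng = np.random.default_rng(0)

candidates = ["hello", "help", "hold"]                 # possible intended words
word_means = {"hello": 0.0, "help": 2.0, "hold": 4.0}  # idealized 1-D neural feature
true_word = "help"
sigma = 0.5                                            # observation noise scale

drift = 0.0
drift_estimate = 0.0
posterior = np.full(len(candidates), 1.0 / len(candidates))
means = np.array([word_means[w] for w in candidates])

for t in range(50):
    drift += 0.05                                      # signal statistics slowly shift
    obs = word_means[true_word] + drift + rng.normal(scale=sigma)

    # Score every candidate under the current drift estimate (Gaussian likelihood).
    adjusted = obs - drift_estimate
    liks = np.exp(-0.5 * ((adjusted - means) / sigma) ** 2)
    posterior = posterior * liks
    posterior /= posterior.sum()

    # Crude drift tracking: nudge the estimate toward the residual of the best guess.
    best = candidates[int(posterior.argmax())]
    drift_estimate += 0.1 * (obs - drift_estimate - word_means[best])

print("posterior over candidates:", dict(zip(candidates, posterior.round(3))))
print("estimated drift:", round(drift_estimate, 2), "true drift:", round(drift, 2))
```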

Learn more about the AI PhD Fellowship program and see the 10 Stanford fellows selected for this two-year opportunity. 

Contributor(s)
Nikki Goth Itoi
