From Privacy to ‘Glass Box’ AI, Stanford Students Are Targeting Real-World Problems

An Amazon-backed fellowship will support 10 Stanford PhD students whose work explores everything from how we communicate to understanding disease and protecting our data.
Noah Cowan wants to help paralyzed people speak through computers with less friction. Valeh Valiollah Pour Amiri is building what she calls a “glass box” AI model that could one day simulate an entire virtual cell. Ken Ziyu Liu is on a mission to protect people from being profiled by the very AI tools they rely on.
These three students are among 10 Stanford PhD candidates who will receive two years of funding and AWS credits through Amazon’s AI PhD Fellowship program, which supports emerging researchers tackling some of AI’s most challenging problems. Their work represents diverse fields from aeronautics and astronautics to deep learning, natural language processing, genetics, and statistics.
The AI PhD Fellowship program aims to drive innovation in practical, useful AI. Amazon is supporting more than 100 scholars across nine universities, offering compute resources, mentorship, and in-person meetings with other scholars.
“This program will help Stanford prepare the next generation of scientific leaders to solve real-world problems,” said David Studdert, Stanford vice provost and dean of research. “It will dramatically accelerate their ability to translate bold ideas into meaningful impact.”
Meet three of the fellows exploring how AI and data science can better serve the world:
The Open Anonymity Project
In the Stanford AI Lab (SAIL), computer science PhD student Ken Ziyu Liu studies the intersection of foundation models, data, and user privacy. He’s on a mission to create better privacy tools that protect humans in an AI-driven world. “I tell my peers that OpenAI probably knows more about you than your parents,” he says. “We want to use these tools, but you can be targeted and profiled very accurately.”
With the Open Anonymity Project, Liu and SAIL’s visiting researcher, Erik Chi, are building a privacy layer for ChatGPT that’s similar to a virtual private network (VPN). This layer removes a user’s identity from the LLM request, making the transaction appear anonymous to the model. One benefit of their approach to privacy is that users would have the only copy of their personal AI activity histories, enabling a digital memory profile controlled by the user, not the AI provider. “Imagine walking around with your own data on your phone, and you decide what information to share with businesses, doctors, and other third parties each time you interact with an AI-powered online service,” Liu explains.
By implementing security technologies commonly used across the software industry – such as blind signatures used in Apple iCloud Private Relay – Liu says the user’s ID becomes detached and untraceable by any model. The user’s chat history is stored locally on their own device, and the model only sees an anonymous interaction for each chat session.
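The idea behind blind signatures can be sketched with a toy RSA example (textbook-sized numbers for illustration only; this is not the Open Anonymity Project's implementation, and real systems such as iCloud Private Relay use hardened variants like RFC 9474 RSA blind signatures with proper padding):

```python
# Toy RSA blind signature: the signer authorizes a token without ever
# seeing which token it signed, so the signed token cannot later be
# linked back to the user who requested it.

# Signer's RSA key (classic textbook parameters, NOT secure)
p, q = 61, 53
n = p * q            # modulus: 3233
e = 17               # public exponent
d = 2753             # private exponent: e * d ≡ 1 (mod (p-1)(q-1))

# --- User side: blind the token before sending it to the signer ---
token = 42           # the value the user wants signed
r = 7                # random blinding factor with gcd(r, n) == 1
blinded = (token * pow(r, e, n)) % n

# --- Signer side: signs the blinded value, learning nothing about `token` ---
blind_sig = pow(blinded, d, n)

# --- User side: unblind to recover a valid signature on `token` ---
sig = (blind_sig * pow(r, -1, n)) % n

# Anyone can verify with the public key alone; the signer cannot
# connect this (token, sig) pair to the blinded request it saw.
assert pow(sig, e, n) == token
print("valid signature:", sig)
```

Unblinding works because `blind_sig = (token * r^e)^d = token^d * r (mod n)`, so multiplying by `r^{-1}` leaves exactly `token^d`, a standard RSA signature on the token.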
A Mechanistic AI Virtual Cell Model

Genetics PhD student Valeh Valiollah Pour Amiri is developing AI models based on DNA sequences that could help scientists understand how living cells function at the molecular level.
In computational biology, scholars aim to understand biology using computer science. Amiri’s focus is interpretability, or understanding what the sequence model is learning. “We want to pioneer a glass box approach, instead of a black box, so we can take the genetic story all the way to the long-term goal of building a comprehensive virtual cell,” she says.
In the early stages of her work, Amiri is exploring different interpretability tools to help her build specialized models that can explain their own outputs. Each model is designed to capture the essence of a specific stage of genetic regulation – for example, transcription factor binding (where proteins bind DNA to regulate gene expression), chromatin accessibility (the degree to which DNA is open or wrapped around histone proteins), and epigenetic modifications (chemical changes to DNA or histones that affect expression levels without altering the DNA sequence).

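The "hierarchy of connected modules" idea can be illustrated with a deliberately simplified sketch, in which each regulatory stage is a separate, inspectable function whose output feeds the next (all functions here are hypothetical toy stand-ins, not Amiri's actual models):

```python
# Hypothetical sketch of a modular, interpretable pipeline: one module
# per stage of gene regulation, chained in a biology-mirroring order.
# Every function below is a toy placeholder for a trained model.

def tf_binding(sequence: str) -> float:
    """Toy stand-in: score how strongly a binding motif is present."""
    return sequence.count("TATA") / max(len(sequence) // 4, 1)

def chromatin_accessibility(binding_score: float) -> float:
    """Toy stand-in: stronger binding -> more open chromatin (clamped to [0, 1])."""
    return min(1.0, 0.2 + 0.8 * binding_score)

def expression_level(accessibility: float, methylation: float) -> float:
    """Toy stand-in: epigenetic methylation suppresses expression."""
    return accessibility * (1.0 - methylation)

# Because each stage is explicit, a change in the final output can be
# traced back to the module responsible -- the "glass box" property.
seq = "GCGCTATAGCGCTATAGCGC"
b = tf_binding(seq)
a = chromatin_accessibility(b)
x = expression_level(a, methylation=0.3)
print(round(x, 3))
```

The design point is that intermediate quantities (binding score, accessibility) remain observable between modules, whereas a single end-to-end black-box model would hide them.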
Next, these individual models can be assembled in a hierarchy of connected modules that interact with each other in ways that mirror biology. Ultimately, Amiri envisions this network of models as a mechanistic AI virtual cell that simulates how and why a cell behaves. Among other applications, this innovation could accelerate new treatments for disease by making it possible to test hypotheses with AI before moving to wet labs and clinical trials. The interpretable and modular nature of such a model can help pinpoint the exact molecular processes affected by various genetic modifications.
AI for Neuroscience

Noah Cowan is applying his background in machine learning and computational neuroscience to improve brain-computer interfaces (BCIs) that could give people who are paralyzed and unable to speak a better way of communicating with loved ones.
A third-year statistics PhD student in the Linderman Lab – part of the Statistics Department and the Wu Tsai Neurosciences Institute at Stanford – Cowan specializes in niche mathematical approaches that help models cope with shifts in data distribution.
“Modern brain-computer interfaces rely on neural networks to translate human thoughts into text. But brain signals change rapidly, making the models less accurate over time,” Cowan says. “We’re building on prior solutions to this problem with a statistical approach that considers multiple likely possibilities for what the user intended to say. Ultimately, we hope users of BCI technology will experience less friction and a smoother communication experience.”
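The general flavor of keeping multiple candidates alive can be sketched with a generic Bayesian update (a hedged illustration of the concept only, not the Linderman Lab's method):

```python
# Sketch: instead of committing to one decoded transcript, maintain a
# posterior over several candidate phrases and re-weight them as each
# noisy neural decode arrives. Candidates and likelihood are toy examples.
import math

candidates = ["hello there", "hello here", "yellow there"]
log_post = {c: math.log(1 / len(candidates)) for c in candidates}  # uniform prior

def decoder_loglik(phrase: str, observed: str) -> float:
    """Toy likelihood: penalize character mismatches against a noisy decode."""
    mismatches = sum(a != b for a, b in zip(phrase, observed))
    mismatches += abs(len(phrase) - len(observed))
    return -1.0 * mismatches

# Each new (noisy) decode re-weights every candidate rather than
# overwriting the transcript, which helps when drifting brain signals
# make any single decode unreliable.
for observed in ["hellp there", "hello thare"]:
    for c in candidates:
        log_post[c] += decoder_loglik(c, observed)

best = max(log_post, key=log_post.get)
print(best)
```

Here two noisy decodes each contain an error, yet the accumulated evidence still ranks the intended phrase highest.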
Learn more about the AI PhD Fellowship program and see the 10 Stanford fellows selected for this two-year opportunity.

