Explore Blog Posts
Using AI to Better Predict Wildfire Spread
Understanding the physics of wind currents above forest canopies may help wildfire managers forecast the flight paths of...
A New Approach Trains Large Language Models in Half the Time
A Stanford team has developed Sophia, a new way to optimize the pretraining of large language models that’s twice as...
A Blueprint for Using AI in Psychotherapy
Scholars outline the best possible uses for these tools and the path to deployment.
Addressing Equity in Natural Language Processing of English Dialects
The Multi-VALUE framework achieves consistent performance across dozens of English dialects.
Reexamining "Fair Use" in the Age of AI
Generative AI claims to produce new language and images, but when those ideas are based on copyrighted material, who...
Could Self-Supervised Learning Be a Game-Changer for Medical Image Classification?
Supervised methods for training medical image models aren’t scalable. A new review highlights the potential of self...
Assessing Political Bias in Language Models
Researchers develop a new tool to measure how well popular large language models align with public opinion to evaluate...
AI Detectors Are Biased Against Non-Native English Writers
Don’t put faith in detectors that are “unreliable and easily gamed,” says scholar.
What is a Foundation Model? An Explainer for Non-Experts
These powerful machine learning algorithms sit at the core of many generative AI tools today.
Diyi Yang: Human-Centered Natural Language Processing Will Produce More Inclusive Technologies
In her course called Human-Centered NLP, Yang challenges students to think beyond technical performance or accuracy.
AI’s Ostensible Emergent Abilities Are a Mirage
According to Stanford researchers, large language models are not greater than the sum of their parts.
New Tool Helps AI and Humans Learn To Code Better
Stanford researchers developed a new framework called Parsel that solves complex coding tasks the way humans do —...