Explore Blog Posts
Using NLP to Detect Mental Health Crises
Scholars develop a new model to surface high-risk messages and dramatically reduce the time it takes to reach a patient...
LLMs Aren’t Ready for Prime Time. Fixing Them Will Be Hard.
Researchers call for an academic-industry partnership on the scale of the Human Genome Project to make large language...
ChatGPT Out-scores Medical Students on Complex Clinical Care Exam Questions
A new study shows AI’s capabilities at analyzing medical text and offering diagnoses — and forces a rethink of medical...
A New Approach Trains Large Language Models in Half the Time
A Stanford team has developed Sophia, a new way to optimize the pretraining of large language models that’s twice as...
A Blueprint for Using AI in Psychotherapy
Scholars outline the best possible uses for these tools and the path to deployment.
Addressing Equity in Natural Language Processing of English Dialects
The Multi-VALUE framework achieves consistent performance across dozens of English dialects.
Assessing Political Bias in Language Models
Researchers develop a new tool to measure how well popular large language models align with public opinion to evaluate...
AI Detectors Biased Against Non-Native English Writers
Don’t put faith in detectors that are “unreliable and easily gamed,” says scholar.
What is a Foundation Model? An Explainer for Non-Experts
These powerful machine learning algorithms sit at the core of many generative AI tools today.
Diyi Yang: Human-Centered Natural Language Processing Will Produce More Inclusive Technologies
In her course called Human-Centered NLP, Yang challenges students to think beyond technical performance or accuracy.
AI’s Ostensible Emergent Abilities Are a Mirage
According to Stanford researchers, large language models are not greater than the sum of their parts.
How Well Do Large Language Models Support Clinician Information Needs?
Stanford experts examine the safety and accuracy of GPT-4 in serving curbside consultation needs of doctors.