News | Stanford HAI
All News at HAI

AI Seeks Out Racist Language in Property Deeds for Termination
Bloomberg Law
Oct 17, 2024
Media Mention
Topics: Machine Learning | Regulation, Policy, Governance | Foundation Models | Law Enforcement and Justice

Dan Ho, HAI Senior Fellow and director of the Stanford RegLab, discusses RegLab's AI model that analyzes decades of property records, helping to identify illegal racially restrictive language in housing documents.

AI+Education: How Large Language Models Could Speed Promising New Classroom Curricula
Nikki Goth Itoi
Oct 14, 2024
News
Topics: Education, Skills

Stanford computer science scholars propose using language models to create new learning materials for K-12 students.

I Launched the AI Safety Clock. Here's What It Tells Us About Existential Risks
TIME
Oct 13, 2024
Media Mention
Topics: Machine Learning | Generative AI

Despite huge advancements in machine learning and neural networks, AI systems still depend on human direction. This article references HAI's 2022 conference, where attendees were encouraged to rethink AI systems with a "human in the loop" and consider a future where people remain at the center of decision making.

The 12 Greatest Dangers Of AI
Forbes
Oct 09, 2024
Media Mention
Topics: Natural Language Processing | Foundation Models | Generative AI

AI expert Gary Marcus references HAI's study showing that LLM responses to medical questions vary widely and are often inaccurate.

The Tech Coup: A New Book Shows How the Unchecked Power of Companies Is Destabilizing Governance
Katharine Miller
Oct 07, 2024
News
Topics: Democracy | Economy, Markets | Energy, Environment | Regulation, Policy, Governance

In The Tech Coup: How to Save Democracy from Silicon Valley, Marietje Schaake, a Stanford HAI Policy Fellow, reveals how tech companies are encroaching on governmental roles, posing a threat to the democratic rule of law.

OpenAI Fast-Tracks AI Agents. How Do We Balance Benefits With Risks?
Forbes
Oct 04, 2024
Media Mention
Topics: Ethics, Equity, Inclusion | Privacy, Safety, Security

Peter Norvig, Distinguished Education Fellow at Stanford HAI, comments on how limiting the budget at an AI agent's disposal, as well as its transaction times and capabilities, can help AI agents "operate safely within defined boundaries."

How Harmful Are AI's Biases on Diverse Student Populations?
Prabha Kannan
Oct 03, 2024
News
Topics: Education, Skills | Ethics, Equity, Inclusion

Large language models exhibit alarming magnitudes of bias when generating stories about learners, often reinforcing harmful stereotypes.

The Digitalist Papers: A Vision for AI and Democracy
Nick Adams Pandolfo
Sep 24, 2024
News
Topics: Democracy | Economy, Markets | Privacy, Safety, Security | Regulation, Policy, Governance

Stanford's Digital Economy Lab taps a multidisciplinary group of thinkers to offer insights on AI and governance in a volume called The Digitalist Papers.

Riana Pfefferkorn: At the Intersection of Technology and Civil Liberties
Shana Lynch
Sep 24, 2024
News
Topics: Law Enforcement and Justice | Privacy, Safety, Security | Regulation, Policy, Governance

Stanford HAI's new Policy Fellow will study AI's implications for privacy and safety, and explore how we can build rights-respecting artificial intelligence.

A New Collaboration Between the Hasso Plattner Institut and HAI Brings the Human Factor of AI to the Forefront
Nikki Goth Itoi
Sep 09, 2024
Announcement

The Hasso Plattner Institut in Potsdam, Germany, and Stanford HAI have launched a joint research program on artificial intelligence and human-computer interaction.
