Foundation Models | Stanford HAI

All Work Published on Foundation Models

Brief Definitions of Key Terms in AI
Stanford HAI | Apr 01, 2022 | Explainer

This explainer provides brief definitions for key terms associated with artificial intelligence, ranging from autonomous systems to deep learning and foundation models.

Topics: Machine Learning; Foundation Models

Big Tech Fails Transparency Test: Gary Marcus On What We Should Demand of AI
Big Think | Feb 04, 2025 | Media Mention

A team of researchers from Stanford HAI, MIT, and Princeton created the Foundation Model Transparency Index, which rated the transparency of 10 AI companies; each one received a failing grade.

Topics: Foundation Models

Large Language Models Just Want To Be Liked
Jan 13, 2025 | News

When LLMs take surveys on personality traits, they, like people, exhibit a desire to appear likable.

Topics: Natural Language Processing; Foundation Models; Generative AI

Strengthening AI Accountability Through Better Third Party Evaluations
Ruth E. Appel | Nov 06, 2024 | News

At a recent Stanford-MIT-Princeton workshop, experts highlighted the need for legal protections, standardized evaluation practices, and better terminology to support third-party AI evaluations.

Topics: Foundation Models; Law Enforcement and Justice; Privacy, Safety, Security; Regulation, Policy, Governance

Are Open-Source AI Models Worth The Risk?
Tech Brew | Oct 31, 2024 | Media Mention

Rishi Bommasani, Society Lead at HAI's Center for Research on Foundation Models (CRFM), discusses where AI is proving most dangerous, why openness is important, and how regulators are thinking about the open-versus-closed divide.

Topics: Foundation Models

Stanford AI Model Helps Locate Racist Deeds In Santa Clara County
KQED | Oct 21, 2024 | Media Mention

Stanford's RegLab, directed by HAI Senior Fellow Daniel E. Ho, developed an AI model that helped Santa Clara County accelerate the process of flagging and mapping restrictive covenants.

Topics: Government, Public Administration; Regulation, Policy, Governance; Law Enforcement and Justice; Machine Learning; Foundation Models