Foundation Models | Stanford HAI


All Work Published on Foundation Models

Chatbots, Like the Rest of Us, Just Want to Be Loved
Wired
Mar 05, 2025
Media Mention

A study led by Stanford HAI Faculty Fellow Johannes Eichstaedt reveals that large language models adapt their behavior to appear more likable when they are being studied, mirroring human tendencies to present favorably.

Natural Language Processing
Machine Learning
Generative AI
Foundation Models
Can Foundation Models Help Us Achieve Perfect Secrecy?
Simran Arora
Apr 01, 2022
Research

A key promise of machine learning is the ability to assist users with personal tasks.

Privacy, Safety, Security
Foundation Models
Responses to NTIA's Request for Comment on AI Accountability Policy
Rishi Bommasani, Sayash Kapoor, Daniel Zhang, Arvind Narayanan, Percy Liang, Jennifer King
Jun 14, 2023
Response to Request

Stanford scholars respond to a federal RFC on AI accountability policy issued by the National Telecommunications and Information Administration (NTIA).

Foundation Models
Privacy, Safety, Security
Regulation, Policy, Governance
Holistic Evaluation of Large Language Models for Medical Applications
Nigam Shah, Mike Pfeffer, Percy Liang
Feb 28, 2025
News

Medical and AI experts build a benchmark for evaluation of LLMs grounded in real-world healthcare needs.

Healthcare
Foundation Models
Improving Transparency in AI Language Models: A Holistic Evaluation
Rishi Bommasani, Daniel Zhang, Tony Lee, Percy Liang
Quick Read
Feb 28, 2023
Issue Brief

This brief introduces the Holistic Evaluation of Language Models (HELM) framework for evaluating commercial AI use cases.

Machine Learning
Foundation Models
Stanford Researchers Say AI Models Are Often Too Racially Color-Blind
TechBrew
Feb 14, 2025
Media Mention

Stanford HAI researchers develop a new benchmark suite aimed at testing difference awareness in AI models.

Foundation Models