Foundation models are large-scale AI models, often transformers, trained on vast amounts of broad, diverse data, that can be adapted to a wide variety of downstream tasks, serving as a "foundation" for many different applications. The term emphasizes both their scale (typically billions of parameters) and their role as a single, multipurpose base that can power numerous AI applications.
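The "single base, many tasks" pattern can be sketched in a few lines. The snippet below is a minimal illustration, assuming PyTorch; `ToyFoundationModel` is a hypothetical stand-in for a large pretrained encoder, not a real foundation model. The pretrained base is frozen, and each downstream task attaches its own small, trainable head to the shared representation.

```python
# Minimal sketch of adapting one pretrained base to multiple downstream
# tasks, assuming PyTorch. ToyFoundationModel is a hypothetical stand-in;
# real foundation models have billions of parameters and are trained on
# broad, diverse data.
import torch
import torch.nn as nn

class ToyFoundationModel(nn.Module):
    """Stand-in for a large pretrained encoder (illustrative only)."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.encoder(x)

base = ToyFoundationModel()

# Freeze the "pretrained" base: only the small task heads are trained.
for p in base.parameters():
    p.requires_grad = False

# Two different downstream tasks share the same base:
head_a = nn.Linear(16, 2)   # e.g. a 2-class sentiment head
head_b = nn.Linear(16, 5)   # e.g. a 5-class topic head

x = torch.randn(4, 16)      # a batch of 4 example inputs
features = base(x)          # shared representation from the base
logits_a = head_a(features) # shape (4, 2)
logits_b = head_b(features) # shape (4, 5)
```

In practice the heads (or lightweight adapters) are fine-tuned on task-specific data while the expensive base is reused, which is what makes a single foundation model economical to adapt across many applications.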
Explore Similar Terms:
Large Language Model (LLM) | Transfer Learning | Transformer

New research adds precision to the debate on openness in AI.

A new index rates the transparency of 10 foundation model companies and finds them lacking.

This new class of models may lead to more affordable, easily adaptable health AI.


A new study explores how to apply machine learning to digital assistants in a way that could better protect our data.


Stanford scholars respond to a federal RFC on dual-use foundation models with widely available model weights, urging policymakers to consider their marginal risks.


Researchers show that ChatGPT can be jailbroken for as little as 20 cents, but they are working on making this more difficult with "self-destructing models."


This brief warns that fair use may not fully shield U.S. foundation models trained on copyrighted data and calls for combined legal and technical safeguards to protect creators.
