Weights are the numerical parameters within a neural network that determine the strength of connections between artificial neurons and ultimately shape how the model processes information. During training, these weights are iteratively adjusted, typically by gradient descent guided by backpropagation, to minimize errors and improve the model's predictions. The learned weights represent the model's "knowledge": a trained AI model is essentially a specific configuration of billions of these weight values that encode patterns discovered from training data.
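To make the update rule concrete, here is a minimal, self-contained sketch of gradient descent on a model with a single weight. It is an illustration of the general principle only, not any production training loop; the function name and toy data are invented for this example, and real networks have billions of weights whose gradients are computed automatically by frameworks via backpropagation.

```python
# Minimal sketch: learn one weight w so that y ≈ w * x.
# Illustrative only; real models adjust billions of weights at once.

def train_single_weight(data, lr=0.1, steps=100):
    """Fit y = w * x by gradient descent on the mean squared error."""
    w = 0.0  # the model's one "weight", initialized arbitrarily
    for _ in range(steps):
        # Mean gradient of the loss (w*x - y)^2 with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # nudge the weight in the error-reducing direction
    return w

# Toy data generated from y = 3x; training should recover w ≈ 3.
data = [(x, 3 * x) for x in [0.5, 1.0, 1.5, 2.0]]
print(train_single_weight(data))  # prints a value close to 3.0
```

Each step moves the weight slightly in the direction that reduces the error on the training data; repeated across billions of parameters, this same principle is what produces a trained foundation model.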

Stanford scholars respond to a federal RFC on dual use foundation models with widely available model weights, urging policymakers to consider their marginal risks.

This brief highlights the benefits of open foundation models and calls for greater focus on their marginal risks.

Experts from industry, academia, and government share lessons learned and outline a path forward at a Princeton-Stanford workshop.

New research adds precision to the debate on openness in AI.

One nonprofit uses novel AI approaches to reduce infant mortality, mitigate agricultural disasters, and improve tuberculosis diagnosis.
Scholars develop a new framework that optimizes compound AI systems by backpropagating large language model feedback.