What are Scaling Laws? | Stanford HAI
What are Scaling Laws?

Scaling Laws are predictable mathematical relationships that describe how AI model performance improves as factors like model size, training data, and computing power increase. These empirical patterns, particularly prominent in large language models, show that bigger models trained on more data with more computation tend to perform better in consistent, measurable ways. Scaling Laws help researchers forecast AI capabilities and determine optimal resource allocation for training increasingly powerful models.
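As a concrete illustration of how such a relationship is used, the sketch below fits the power-law form L(N) ≈ a · N^(−α), relating validation loss to parameter count N in the style popularized by Kaplan et al. (2020), and then extrapolates the fitted curve to a larger model. The loss values and the 10^11-parameter target here are invented for illustration; real forecasts fit curves like this to measurements from actual training runs.

    import numpy as np

    # Hypothetical (parameter count, validation loss) pairs; in practice
    # these come from training runs at several model scales.
    n_params = np.array([1e6, 1e7, 1e8, 1e9, 1e10])
    loss = np.array([4.60, 3.90, 3.30, 2.80, 2.37])

    # Fit L(N) = a * N**(-alpha) by linear regression in log-log space:
    #   log L = log a - alpha * log N
    slope, intercept = np.polyfit(np.log(n_params), np.log(loss), 1)
    alpha = -slope          # scaling exponent
    a = np.exp(intercept)   # prefactor

    # Extrapolate the fitted curve to a model size we have not trained.
    n_target = 1e11
    predicted_loss = a * n_target ** (-alpha)

    print(f"fitted exponent alpha = {alpha:.3f}")
    print(f"predicted loss at {n_target:.0e} params = {predicted_loss:.2f}")

The same fit-then-extrapolate pattern underlies compute-allocation results such as Hoffmann et al.'s (2022) "Chinchilla" analysis, which found that under a fixed compute budget, model size and training data should be scaled up in roughly equal proportion.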

Explore Similar Terms:

Foundation Model | GPUs (Graphics Processing Unit) | Big Data

See Full List of Terms & Definitions

Scaling Laws mentioned at Stanford HAI

Longitudinal Self-Supervised Learning
Qingyu Zhao, Zixuan Liu, Ehsan Adeli, Kilian M. Pohl
Dec 10
Research

Are Universal Self-Supervised Learning Algorithms Within Reach?
Andrew Myers
Jan 19
News | Machine Learning

A new benchmarking tool helps AI scholars train algorithms that work on any domain, from images to text, video, medical images, and more — all at the same time.

Could Self-Supervised Learning Be a Game-Changer for Medical Image Classification?
Katharine Miller
May 30
News | Healthcare | Machine Learning

Supervised methods for training medical image models aren’t scalable. A new review highlights the potential of self-supervised learning.

Self-Supervised Learning Of Brain Dynamics From Broad Neuroimaging Data
Armin W. Thomas, Russell A. Poldrack
Mar 15
Research