Scaling Laws are predictable mathematical relationships that describe how AI model performance improves as factors like model size, training data, and computing power increase. These empirical patterns, particularly prominent in large language models, show that bigger models trained on more data with more computation tend to improve in consistent, measurable ways, often following smooth power-law curves. Scaling Laws help researchers forecast AI capabilities and determine how best to allocate resources when training increasingly powerful models.
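To make the idea concrete, the snippet below is a minimal, illustrative sketch of how such a relationship can be fit and then extrapolated. It assumes a simple power-law form relating validation loss to parameter count; the data points, variable names, and resulting exponent are made-up assumptions for illustration, not figures from any published study.

```python
# Illustrative sketch only: fit a power-law scaling curve
# L(N) = a * N**(-alpha) to hypothetical (model size, loss) pairs,
# then extrapolate it to a larger, not-yet-trained model size.
import numpy as np

# Hypothetical observations: parameter counts and validation losses.
model_sizes = np.array([1e7, 1e8, 1e9, 1e10])
val_losses = np.array([4.2, 3.4, 2.8, 2.3])

# A power law is linear in log-log space, so fit
# log L = log a - alpha * log N with ordinary least squares.
slope, intercept = np.polyfit(np.log(model_sizes), np.log(val_losses), 1)
alpha, a = -slope, np.exp(intercept)

# Use the fitted curve to forecast loss at a larger model size.
predicted_loss = a * (1e11) ** (-alpha)
print(f"fitted exponent alpha = {alpha:.3f}, "
      f"predicted loss at 1e11 params = {predicted_loss:.2f}")
```

In practice, researchers fit curves of this kind to many training runs across model size, dataset size, and compute budget, then use the fitted trends to choose how to split a fixed compute budget between parameters and data.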
Explore Similar Terms:
Foundation Model | GPUs (Graphics Processing Unit) | Big Data