Self-Supervised Learning is a training method where an AI model teaches itself by creating its own puzzles from raw data and then trying to solve them. For instance, the model might learn language by trying to predict missing words in sentences, or learn about images by guessing which pieces belong together. This technique has become essential for training large AI models because it allows them to learn from vast amounts of data—like all the text on the internet—without requiring expensive and time-consuming human annotation.
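The "create your own puzzle" idea can be sketched in a few lines. This is a minimal, hypothetical illustration (not any specific library's API): it turns raw sentences into self-supervised (input, target) pairs by hiding one word per sentence, so the "label" is recovered from the data itself rather than from human annotation.

```python
import random

def make_masked_examples(sentences, mask_token="[MASK]", seed=0):
    """Build self-supervised training pairs from raw text.

    Each sentence yields one (masked_sentence, hidden_word) pair.
    No human labeling is required: the target comes from the data.
    """
    rng = random.Random(seed)
    examples = []
    for sent in sentences:
        words = sent.split()
        if len(words) < 2:  # need at least one word left as context
            continue
        i = rng.randrange(len(words))   # pick a word to hide
        target = words[i]
        masked = words.copy()
        masked[i] = mask_token
        examples.append((" ".join(masked), target))
    return examples

# Raw, unlabeled text becomes a supervised-style dataset:
corpus = ["the cat sat on the mat", "self supervised learning needs no labels"]
pairs = make_masked_examples(corpus)
for inp, tgt in pairs:
    print(inp, "->", tgt)
```

A real model (e.g., a masked language model) would then be trained to predict the hidden word from the masked input; the same recipe scales to web-size corpora because the pairs are generated automatically.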
Explore Similar Terms:
Unsupervised Learning | Supervised Learning | Foundation Model
Longitudinal Self-Supervised Learning

A new benchmarking tool helps AI researchers train algorithms that work across many domains at once, from images and text to video and medical images.


Supervised methods for training medical image models aren’t scalable. A new review highlights the potential of self-supervised learning.

Self-Supervised Learning Of Brain Dynamics From Broad Neuroimaging Data