Model drift occurs when a machine learning model's performance degrades over time because the real-world data it encounters has diverged from the data it was originally trained on. This happens when the patterns, relationships, or distributions in incoming data shift due to changes in user behavior, market conditions, seasonality, or other external factors. For example, a fraud detection model trained before the pandemic might perform poorly afterward because shopping patterns fundamentally changed, and a recommendation system might drift as user preferences evolve.
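One common way to catch this kind of drift in practice is to compare the distribution of incoming feature values against the distribution seen at training time, for instance with a two-sample Kolmogorov-Smirnov test. The sketch below is illustrative rather than part of the article: the transaction-amount feature, the simulated data, and the 0.05 significance threshold are all assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_feature_drift(train_values, live_values, alpha=0.05):
    """Flag drift when the live distribution differs significantly
    from the training distribution (two-sample KS test)."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha, statistic

# Hypothetical example: transaction amounts before and after a shift
# in shopping behavior, simulated as lognormal distributions.
rng = np.random.default_rng(0)
train_amounts = rng.lognormal(mean=3.0, sigma=0.5, size=10_000)  # training-era data
live_amounts = rng.lognormal(mean=3.4, sigma=0.7, size=2_000)    # post-shift data

drifted, stat = detect_feature_drift(train_amounts, live_amounts)
print(f"drift detected: {drifted} (KS statistic = {stat:.3f})")
```

A per-feature statistical check like this only flags distribution shift in the inputs; in production it is typically paired with monitoring of the model's actual accuracy on labeled outcomes, since input drift does not always translate into performance loss.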