What is Federated Learning?

Federated Learning is a machine learning approach where a model is trained across multiple decentralized devices or servers that hold local data, without the data ever leaving its original location. Instead of collecting all data in one central location, each device trains the model on its own data and only shares model updates (like adjusted weights) with a central server, which aggregates these updates to improve the global model. The approach improves privacy and security by keeping sensitive data (think health care records, financial information) localized.
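The round-based loop described above (local training, then server-side aggregation of weights) can be sketched in plain Python. This is a minimal toy, not any particular framework's API: the linear model, client data, and function names are all illustrative, and the weighted averaging follows the FedAvg idea of averaging client weights by local dataset size.

```python
# Toy sketch of federated averaging (FedAvg-style). All names are
# illustrative; real systems add encryption, sampling, and robust aggregation.
import random

def local_update(weights, data, lr=0.1, epochs=5):
    """Train a toy linear model on one client's local data.
    Only the updated weights leave the device, never the raw data."""
    w, b = weights
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return (w, b)

def aggregate(updates, sizes):
    """Server averages client weights, weighted by local dataset size."""
    total = sum(sizes)
    w = sum(u[0] * n for u, n in zip(updates, sizes)) / total
    b = sum(u[1] * n for u, n in zip(updates, sizes)) / total
    return (w, b)

# Three clients each hold private samples of y = 2x + 1; the raw
# (x, y) pairs stay local for the entire run.
random.seed(0)
clients = [[(x, 2 * x + 1) for x in (random.uniform(-1, 1) for _ in range(20))]
           for _ in range(3)]

global_weights = (0.0, 0.0)
for _ in range(30):  # communication rounds
    updates = [local_update(global_weights, data) for data in clients]
    global_weights = aggregate(updates, [len(d) for d in clients])

print(global_weights)  # approaches the true parameters (2.0, 1.0)
```

Each round, every client refines the shared weights on its own data, and the server only ever sees the resulting parameter updates, which is what keeps sensitive records localized.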

Federated Learning mentioned at Stanford HAI

Explore Similar Terms:

Machine Learning (ML) | Training Data | Ethical AI

See Full List of Terms & Definitions

Should AI Models Be Explainable? That depends.
Katharine Miller
Mar 16
news · Machine Learning

A Stanford researcher advocates for clarity about the different types of interpretability and the contexts in which it is useful.
AI Overreliance Is a Problem. Are Explanations a Solution?
Katharine Miller
Mar 13
news · Design, Human-Computer Interaction

Stanford researchers show that shifting the cognitive costs and benefits of engaging with AI explanations could result in fewer erroneous decisions due to AI overreliance.
