What are Generative Adversarial Networks (GANs)? | Stanford HAI

GANs (Generative Adversarial Networks) are an AI architecture consisting of two neural networks, a “generator” and a “discriminator,” that compete against each other to produce realistic synthetic data. The generator tries to create fake data (such as images, audio, or text) that appears real, while the discriminator tries to distinguish real samples from generated ones. Through this adversarial training, the generator becomes progressively better at producing convincing outputs, until the discriminator can no longer tell the difference.
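The adversarial loop described above can be sketched in a few lines. The following is a minimal illustrative example, not code from Stanford HAI: a one-dimensional GAN in plain NumPy, where the generator is a linear map over Gaussian noise, the discriminator is a logistic classifier, and the two are updated in alternation with hand-derived gradients. All variable names and hyperparameters here are arbitrary choices for the demo.

```python
import numpy as np

# Toy 1-D GAN sketch (illustrative only). The generator G(z) = a*z + c
# maps standard Gaussian noise toward real data drawn from N(4, 0.5);
# the discriminator D(x) = sigmoid(w*x + b) is a logistic classifier.

rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

w, b = 0.1, 0.0          # discriminator parameters
a, c = 1.0, 0.0          # generator parameters
lr_d, lr_g = 0.1, 0.01   # learning rates (D adapts faster than G)
batch, steps = 64, 3000

for _ in range(steps):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    real = rng.normal(4.0, 0.5, batch)
    fake = a * rng.normal(0.0, 1.0, batch) + c
    err_real = sigmoid(w * real + b) - 1.0   # grad of -log D(x) w.r.t. score
    err_fake = sigmoid(w * fake + b)         # grad of -log(1 - D(G(z)))
    w -= lr_d * np.mean(err_real * real + err_fake * fake)
    b -= lr_d * np.mean(err_real + err_fake)

    # Generator step (non-saturating loss): push D(fake) toward 1.
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + c
    err = (sigmoid(w * fake + b) - 1.0) * w  # grad of -log D(G(z)) w.r.t. fake
    a -= lr_g * np.mean(err * z)
    c -= lr_g * np.mean(err)

# After training, generated samples should cluster near the real mean of 4.
samples = a * rng.normal(0.0, 1.0, 10_000) + c
print(samples.mean())
```

In a real system both players would be deep networks trained with a framework's automatic differentiation, but the alternating structure is the same: one discriminator update that sharpens the real-versus-fake classifier, then one generator update that follows the discriminator's gradient to make the fakes harder to reject.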

GANs mentioned at Stanford HAI

Explore Similar Terms:

Generative AI | Diffusion Models | Deep Learning


Preparing for the Age of Deepfakes and Disinformation
Dan Boneh, Andrew J. Grotto, Patrick McDaniel, Nicolas Papernot
Quick Read | Nov 01
policy brief

This brief warns of the dangers of generative adversarial networks that can make realistic deepfakes, calling for comprehensive norms, regulations, and laws to counter AI-driven disinformation.

