Diffusion models are a type of generative model that creates new content, such as images, by learning to add and then remove "noise." During training, an image generator takes a real image and gradually adds random noise until it becomes pure, unrecognizable static; the model then learns to reverse this process step by step. To generate something new, it starts from pure noise and progressively denoises it into a clear, realistic image. The technology is behind AI image generators like DALL-E, Midjourney, and Stable Diffusion.
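To make the two processes concrete, here is a minimal sketch in the style of a DDPM-type diffusion model. The noise schedule values and the `denoiser` network are illustrative assumptions, not the implementation of any system named above; in practice the denoiser is a large neural network (often a U-Net) trained to predict the noise that was added.

```python
import torch

# A minimal DDPM-style sketch (assumed schedule; `denoiser` is any trained
# noise-prediction network and is a placeholder here).

T = 1000                                   # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # cumulative products of alphas

def add_noise(x0, t):
    """Forward process: noise a clean image x0 to timestep t in one shot."""
    noise = torch.randn_like(x0)
    ab = alpha_bars[t]
    return ab.sqrt() * x0 + (1 - ab).sqrt() * noise, noise

@torch.no_grad()
def sample(denoiser, shape):
    """Reverse process: start from pure static and denoise step by step."""
    x = torch.randn(shape)                 # pure Gaussian noise
    for t in reversed(range(T)):
        eps = denoiser(x, t)               # network's noise prediction at step t
        # Remove the predicted noise (DDPM posterior mean)...
        x = (x - betas[t] / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:                          # ...and re-inject a little randomness
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return x
```

The training objective pairs these two halves: the network sees a noised image from `add_noise` and is penalized for mispredicting the noise, which is what lets `sample` run the process in reverse.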
Explore Similar Terms:
Generative AI | GANs (Generative Adversarial Networks) | Deep Learning

Experts from industry, academia, and government share lessons learned and outline a path forward at a Princeton-Stanford workshop.


The image-generating model has some impressive capabilities that parallel the brain, but is it really creative?


Stanford AIMI scholars found a way to generate synthetic chest X-rays by fine-tuning the open-source Stable Diffusion foundation model.


This new class of models may lead to more affordable, easily adaptable health AI.
