Architecture

What is a Diffusion Model?

The AI architecture behind image generators like Midjourney and Stable Diffusion.

Definition

A diffusion model is a type of generative AI that creates images by starting with random noise and gradually "denoising" it into a coherent image, guided by a text prompt. The model is trained by adding noise to real images and learning to reverse the process. This architecture powers Midjourney, Stable Diffusion, and DALL-E.
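The training idea above, adding noise to real images so the model can learn to reverse it, can be sketched in a few lines. This is a toy illustration, not any real model's code: the linear noise schedule and the function name `add_noise` are assumptions made here for clarity (production models such as Stable Diffusion use more sophisticated schedules and a neural network to predict the noise).

```python
import numpy as np

def add_noise(image, t, num_steps=1000):
    """Forward diffusion (toy version): blend an image with Gaussian noise.

    At t=0 the image is untouched; at t=num_steps it is pure noise.
    The linear schedule here is an illustrative assumption -- real
    models use carefully tuned (e.g. cosine) schedules.
    """
    alpha = 1.0 - t / num_steps              # fraction of signal that survives
    noise = np.random.randn(*image.shape)    # Gaussian noise, same shape as image
    noisy = np.sqrt(alpha) * image + np.sqrt(1.0 - alpha) * noise
    return noisy, noise                      # training: predict `noise` from `noisy`
```

During training, the model sees `noisy` and is penalized (typically with a mean-squared-error loss) for how far its prediction is from the actual `noise` that was added, which is what "learning to reverse the process" means in practice.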

💡 Example

When you type a prompt into Midjourney, the diffusion model starts with pure visual noise and iteratively refines it over many steps, removing noise and shaping the image to match your description until a clear, detailed image emerges.
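The iterative refinement loop described above can be sketched as follows. Again this is a stand-in, not Midjourney's actual code: the trivial `target` array replaces what would be a trained neural network's prompt-guided prediction, and the function names are invented for illustration.

```python
import numpy as np

def denoise_step(x, step_fraction):
    """One toy denoising step: nudge the array toward a target estimate.

    In a real diffusion model, `target` would come from a trained
    neural network conditioned on the text prompt; here it is a
    placeholder (all zeros) so the loop is runnable on its own.
    """
    target = np.zeros_like(x)
    return x + step_fraction * (target - x)

def sample(shape, num_steps=50):
    """Start from pure visual noise and iteratively refine it."""
    x = np.random.randn(*shape)              # pure noise, as in the article
    for _ in range(num_steps):
        x = denoise_step(x, 1.0 / num_steps) # each step removes a little noise
    return x
```

Running `sample((64, 64))` starts from random noise and ends with a noticeably "calmer" array; in a real model, each step similarly shaves away noise until a clear image consistent with the prompt remains.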
