Stanford CME296 Diffusion & Large Vision Models | Spring 2026 | Lecture 4 - Latent Space & Guidance

| April 28, 2026 | 1:40:58

TL;DR

This lecture explains why high-dimensional pixel space (on the order of a million dimensions for a standard image) is computationally intractable for diffusion models, and how Variational Autoencoders (VAEs) address this by compressing images into structured latent spaces regularized toward a standard normal distribution, enabling efficient and semantically meaningful generation.

🚫 The Problem with Pixel Space (3 insights)

Standard images require millions of dimensions

A 1024×1024 RGB image contains about 10^6 pixels (over 3 million raw values across its three channels), creating computational bottlenecks for diffusion models while redundant local pixel neighborhoods waste representational capacity.
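A quick back-of-the-envelope check of that dimensionality (a minimal sketch; the 1024×1024 resolution is just the lecture's running example):

```python
# Rough count of the values a diffusion model would have to handle per image
# when operating directly in pixel space.
height, width, channels = 1024, 1024, 3
n_pixels = height * width            # 1,048,576 pixels, i.e. ~10^6
n_values = n_pixels * channels       # 3,145,728 raw values with RGB channels
print(f"{n_pixels:,} pixels -> {n_values:,} pixel-space dimensions")
```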

Pixel noise produces meaningless variations

Adding noise to raw pixels produces invalid images rather than semantic variations, because valid images occupy sparse, 'spiky' regions of pixel space instead of the smooth, clustered densities that effective generation requires.
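A minimal sketch of this failure mode, assuming NumPy and a random array as a stand-in for a real photo:

```python
import numpy as np

# Perturbing raw pixels with Gaussian noise moves the image off the data
# manifold: the result looks like static, not a different valid photo.
rng = np.random.default_rng(0)
image = rng.uniform(0.0, 1.0, size=(1024, 1024, 3))   # stand-in for a real photo
noisy = np.clip(image + 0.5 * rng.standard_normal(image.shape), 0.0, 1.0)
```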

Global structure differs from texture details

Semantic similarity refers to overall geometry and composition (e.g., two teddy bears reading), while perceptual similarity captures low-level textures that make images appear identical to human observers despite pixel differences.
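One way to make this distinction concrete is to contrast raw pixel error with a learned perceptual distance. The sketch below uses the open-source lpips package, which the lecture does not name, so treat it as an illustrative assumption:

```python
import torch
import lpips  # pip install lpips

# Pixel-wise MSE vs. a learned perceptual distance on the same pair of images.
loss_fn = lpips.LPIPS(net="alex")            # perceptual metric (AlexNet features)
img0 = torch.rand(1, 3, 256, 256) * 2 - 1    # LPIPS expects inputs in [-1, 1]
img1 = torch.rand(1, 3, 256, 256) * 2 - 1

pixel_mse = torch.mean((img0 - img1) ** 2)   # low-level pixel difference
perceptual = loss_fn(img0, img1)             # difference as "seen" by a deep network
print(pixel_mse.item(), perceptual.item())
```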

🏗️ Autoencoders: Compression Without Structure (2 insights)

Autoencoders compress via spatial downsampling

Using strided convolutions and pooling, the encoder reduces spatial resolution by a typical factor of 8 per side, producing a bottleneck latent representation that the decoder upsamples back into a reconstruction of the input.
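A minimal PyTorch sketch of such an encoder/decoder pair with 8x-per-side downsampling; the layer widths and 512×512 input size are illustrative assumptions, not the lecture's exact architecture:

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Convolutional autoencoder: 3x512x512 image -> 4x64x64 latent -> reconstruction."""
    def __init__(self, latent_channels=4):
        super().__init__()
        self.encoder = nn.Sequential(   # three stride-2 convs = 8x spatial downsampling
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, latent_channels, 3, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(   # three stride-2 transposed convs = 8x upsampling
            nn.ConvTranspose2d(latent_channels, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)             # bottleneck latent
        return self.decoder(z), z
```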

Reconstruction alone lacks semantic continuity

Optimizing solely for pixel-perfect reconstruction yields a discontinuous latent space in which similar images can map to distant regions, making such latents incompatible with diffusion processes that must start from samples of a standard normal distribution.
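A minimal sketch of that reconstruction-only objective, reusing the AutoEncoder sketch above; note that nothing in the loss constrains the geometry of the latent space:

```python
import torch
import torch.nn.functional as F

model = AutoEncoder()                        # from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.rand(8, 3, 512, 512)          # stand-in for a batch of training images
reconstruction, latents = model(images)
loss = F.mse_loss(reconstruction, images)    # pixel-wise reconstruction error only
loss.backward()
optimizer.step()
```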

🎲 Variational Autoencoders: Enforcing Structure (2 insights)

VAEs encode images as probability distributions

Rather than mapping each image to a single deterministic vector, a Variational Autoencoder's encoder predicts mean and variance parameters and samples the latent from that per-image distribution, making the encoding probabilistic.
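A minimal PyTorch sketch of that sampling step via the reparameterization trick; the head module and channel sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class VAEHead(nn.Module):
    """Predicts a per-location mean and log-variance, then samples a latent."""
    def __init__(self, in_channels=64, latent_channels=4):
        super().__init__()
        self.to_mu = nn.Conv2d(in_channels, latent_channels, 1)
        self.to_logvar = nn.Conv2d(in_channels, latent_channels, 1)

    def forward(self, features):
        mu = self.to_mu(features)
        logvar = self.to_logvar(features)
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)          # noise drawn from N(0, I)
        z = mu + eps * std                   # reparameterized sample, differentiable w.r.t. mu and logvar
        return z, mu, logvar
```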

Standard normal prior enforces smooth geometry

By regularizing latent distributions to approximate a standard normal distribution (the 'prior'), VAEs create continuous, structured spaces where proximity indicates semantic similarity, enabling diffusion models to learn meaningful probability flows.
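That regularizer is the KL divergence between each predicted Gaussian and the standard normal prior, which has a simple closed form for diagonal Gaussians; a minimal sketch follows (the beta weighting is a common design choice, not something the lecture specifies):

```python
import torch

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ) = 0.5 * sum(mu^2 + sigma^2 - log sigma^2 - 1)
    return 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1.0)

# Typical VAE objective: reconstruction term plus the prior-matching term.
# loss = reconstruction_loss + beta * kl_to_standard_normal(mu, logvar)
```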

Bottom Line

To make diffusion models computationally feasible and semantically controllable, images must be compressed into low-dimensional latent spaces using Variational Autoencoders that regularize latents toward a standard normal prior, ensuring smooth probability flows from noise to meaningful outputs.
