Stanford CS221 | Autumn 2025 | Lecture 13: Bayesian Networks and Gibbs Sampling
TL;DR
This lecture explains how Bayesian networks compactly represent joint probability distributions through local conditional probabilities, then contrasts inefficient rejection sampling with Gibbs sampling—an MCMC method that iteratively modifies existing samples to satisfy evidence, enabling efficient approximate inference even with rare events.
🌐 Bayesian Network Representation 3 insights
Joint distribution as product of local conditionals
The joint probability over all variables equals the product of each node's conditional probability given its parents, such as P(B,E,A) = P(B)P(E)P(A|B,E) where P(B=1)=0.05 and P(E=1)=0.05, avoiding the need to specify 2^100 entries for 100 variables.
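As a minimal sketch of this factorization, the product below reproduces a single joint entry from the lecture's numbers; the deterministic alarm A = B OR E is an assumption (not stated in this summary) that is consistent with the quoted posterior of 0.51:

```python
# Joint probability as a product of local conditionals (alarm network sketch)
pB = {1: 0.05, 0: 0.95}   # P(B) from the lecture
pE = {1: 0.05, 0: 0.95}   # P(E) from the lecture

def pA(a, b, e):
    # Assumed deterministic alarm A = B OR E
    return 1.0 if a == (1 if (b or e) else 0) else 0.0

# P(B=1, E=0, A=1) = P(B=1) * P(E=0) * P(A=1 | B=1, E=0)
joint = pB[1] * pE[0] * pA(1, 1, 0)
print(joint)  # 0.0475
```

Only three small local tables are needed, rather than one table with an entry for every full assignment.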
Graph structure encodes dependencies
Directed edges represent direct dependencies—such as Alarm (A) depending on Burglary (B) and Earthquake (E)—while lack of edges indicates conditional independence assumptions that make the representation compact.
Exact inference via conditioning and marginalization
To compute P(B|A=1)≈0.51, slice the joint distribution at the evidence (A=1), marginalize out the irrelevant variable (E) by summing, and normalize by dividing by P(A=1).
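The three steps can be written out directly by enumeration; the deterministic alarm A = B OR E below is an assumption consistent with the lecture's quoted answer:

```python
# Exact inference by conditioning and marginalization (alarm network sketch)
pB = {1: 0.05, 0: 0.95}
pE = {1: 0.05, 0: 0.95}

def pA(a, b, e):
    # Assumed deterministic alarm A = B OR E
    return 1.0 if a == (1 if (b or e) else 0) else 0.0

# Slice to the evidence A=1, then marginalize out E by summing
numerator = sum(pB[1] * pE[e] * pA(1, 1, e) for e in (0, 1))
# Normalizer P(A=1): sum over all values of B and E
normalizer = sum(pB[b] * pE[e] * pA(1, b, e) for b in (0, 1) for e in (0, 1))
posterior = numerator / normalizer
print(round(posterior, 2))  # 0.51
```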
❌ Rejection Sampling Limitations 3 insights
Simple but wasteful sampling mechanism
Rejection sampling generates independent samples from the probabilistic program and discards any where evidence fails (e.g., A≠1), using only matching samples to estimate query probabilities like P(B|A=1)≈0.44 with 300 samples.
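A sketch of this mechanism on the alarm network, again assuming the deterministic alarm A = B OR E; note how many samples are thrown away even though the evidence here is not especially rare:

```python
import random

random.seed(0)

def sample_joint():
    # Forward-sample the probabilistic program: parents before children
    b = random.random() < 0.05
    e = random.random() < 0.05
    a = b or e   # assumed deterministic alarm
    return b, a

accepted = []
for _ in range(100_000):
    b, a = sample_joint()
    if a:                      # reject any sample where the evidence A=1 fails
        accepted.append(b)

estimate = sum(accepted) / len(accepted)
print(round(estimate, 2))  # close to the exact 0.51
print(len(accepted))       # only about 10% of samples survive rejection
```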
Catastrophic inefficiency with rare evidence
When evidence has exponentially small probability (e.g., 10^-30), the algorithm rejects nearly all samples, requiring prohibitive computation time to obtain sufficient valid observations.
Evidence-blind generation
Samples are generated without considering the target evidence, making the algorithm 'blind' to constraints until the rejection check occurs.
🔄 Gibbs Sampling Strategy 3 insights
MCMC with dependent samples
Gibbs sampling generates a Markov chain where each sample depends on the previous one, starting with an arbitrary assignment that satisfies the evidence and avoiding rejection entirely.
Iterative single-variable updates
The algorithm cycles through variables, resampling each one conditioned on the current values of all other variables; this conditional depends only on the variable's Markov blanket, so each update is cheap, and the chain gradually converges to the target conditional distribution.
Efficiency trade-off
While samples are correlated—requiring more samples than independent draws for equivalent accuracy—the method avoids exponential rejection rates, making it practical for rare evidence.
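A minimal Gibbs sketch on the same alarm network, holding the evidence A=1 fixed throughout; as before, the deterministic alarm A = B OR E is an assumption consistent with the lecture's numbers:

```python
import random

random.seed(0)
pB1, pE1 = 0.05, 0.05   # priors from the lecture

def alarm(b, e):
    # Assumed deterministic alarm A = B OR E
    return 1 if (b or e) else 0

b, e = 1, 1             # arbitrary start consistent with the evidence A=1
hits, iters = 0, 50_000
for _ in range(iters):
    # Resample B | E, A=1: prior weight times consistency with the evidence
    w1 = pB1 * (1.0 if alarm(1, e) == 1 else 0.0)
    w0 = (1 - pB1) * (1.0 if alarm(0, e) == 1 else 0.0)
    b = 1 if random.random() < w1 / (w1 + w0) else 0
    # Resample E | B, A=1 symmetrically
    w1 = pE1 * (1.0 if alarm(b, 1) == 1 else 0.0)
    w0 = (1 - pE1) * (1.0 if alarm(b, 0) == 1 else 0.0)
    e = 1 if random.random() < w1 / (w1 + w0) else 0
    hits += b

print(hits / iters)  # approaches the exact 0.51 without rejecting any samples
```

No sample is ever discarded: every iteration produces a (correlated) draw that already satisfies A=1.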
☎️ Telephone Game Example 2 insights
Modeling noisy communication chains
A 3-node chain (A→B→C) represents message passing where each hop preserves the bit with probability 0.8 and flips it with probability 0.2.
Inference with rare evidence
Given the final recipient hears C=1, Gibbs sampling can efficiently estimate P(A=1|C=1)≈0.65 without rejecting samples, whereas rejection sampling struggles if C=1 is unlikely.
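The chain version can be sketched in a few lines; the uniform prior P(A=1)=0.5 below is an assumption not stated in the summary:

```python
import random

random.seed(0)
FLIP = 0.2   # per-hop flip probability from the lecture

def p_hop(child, parent):
    # P(child | parent) for one noisy hop of the chain A -> B -> C
    return 1 - FLIP if child == parent else FLIP

a, b = 1, 1   # arbitrary start consistent with the evidence
c = 1         # evidence C=1, held fixed and never resampled
hits, iters = 0, 50_000
for _ in range(iters):
    # Resample B given its Markov blanket {A, C}
    w1 = p_hop(1, a) * p_hop(c, 1)
    w0 = p_hop(0, a) * p_hop(c, 0)
    b = 1 if random.random() < w1 / (w1 + w0) else 0
    # Resample A given its Markov blanket {B}, with the assumed uniform prior
    w1 = 0.5 * p_hop(b, 1)
    w0 = 0.5 * p_hop(b, 0)
    a = 1 if random.random() < w1 / (w1 + w0) else 0
    hits += a

print(hits / iters)  # sampling estimate of P(A=1 | C=1)
```

Under the uniform-prior assumption the exact posterior is P(A=1|C=1) = (0.8·0.8 + 0.2·0.2) = 0.68; a finite Gibbs run such as the lecture's returns a nearby estimate.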
Bottom Line
Use Gibbs sampling to iteratively adjust existing samples rather than generating from scratch, trading sample independence for the ability to perform efficient approximate inference even when conditioning on rare evidence.