Stanford CS221 | Autumn 2025 | Lecture 13: Bayesian Networks and Gibbs Sampling

Podcasts | March 09, 2026 | 1:15:54

TL;DR

This lecture explains how Bayesian networks compactly represent joint probability distributions through local conditional probabilities, then contrasts inefficient rejection sampling with Gibbs sampling—an MCMC method that iteratively modifies existing samples to satisfy evidence, enabling efficient approximate inference even with rare events.

🌐 Bayesian Network Representation

Joint distribution as product of local conditionals

The joint probability over all variables factors as the product of each node's conditional probability given its parents, e.g., P(B,E,A) = P(B)P(E)P(A|B,E) with P(B=1)=0.05 and P(E=1)=0.05. For 100 binary variables, this factorization avoids having to specify 2^100 joint entries.
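A minimal numeric check of this factorization, assuming an illustrative alarm table P(A=1|B,E) since the summary gives only the two priors:

```python
import itertools

# Priors from the lecture: P(B=1) = P(E=1) = 0.05.
p_b = {1: 0.05, 0: 0.95}
p_e = {1: 0.05, 0: 0.95}
# The alarm table P(A=1 | B, E) is not stated in the summary;
# these values are illustrative assumptions.
p_a1 = {(1, 1): 0.95, (1, 0): 0.90, (0, 1): 0.80, (0, 0): 0.01}

def joint(b, e, a):
    """P(B=b, E=e, A=a) = P(b) * P(e) * P(a | b, e)."""
    pa = p_a1[(b, e)]
    return p_b[b] * p_e[e] * (pa if a == 1 else 1 - pa)

# 2 + 2 + 4 = 8 local numbers determine all 2^3 joint entries,
# and the joint still sums to 1 over every assignment.
total = sum(joint(b, e, a) for b, e, a in itertools.product((0, 1), repeat=3))
```

The same factorization is what makes 100 variables tractable: table sizes grow with the number of parents per node, not with the size of the full joint.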

Graph structure encodes dependencies

Directed edges represent direct dependencies—such as Alarm (A) depending on Burglary (B) and Earthquake (E)—while lack of edges indicates conditional independence assumptions that make the representation compact.

Exact inference via conditioning and marginalization

To compute P(B=1|A=1)≈0.51, restrict the joint distribution to assignments consistent with the evidence (A=1), marginalize out the irrelevant variable (E) by summing, and normalize by dividing by P(A=1).
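The slice-marginalize-normalize recipe is a few lines of enumeration. A sketch assuming an illustrative alarm table P(A=1|B,E), since the summary gives only the priors; with these assumed numbers the posterior comes out near 0.49, comparable to the lecture's 0.51:

```python
import itertools

# Priors from the lecture; the alarm table P(A=1 | B, E) is an
# illustrative assumption (the summary gives only the priors).
p_b = {1: 0.05, 0: 0.95}
p_e = {1: 0.05, 0: 0.95}
p_a1 = {(1, 1): 0.95, (1, 0): 0.90, (0, 1): 0.80, (0, 0): 0.01}

def joint(b, e, a):
    pa = p_a1[(b, e)]
    return p_b[b] * p_e[e] * (pa if a == 1 else 1 - pa)

# Slice to the evidence A=1, marginalize out E, then normalize.
p_b1_and_a1 = sum(joint(1, e, 1) for e in (0, 1))        # P(B=1, A=1)
p_a1_total = sum(joint(b, e, 1)                          # P(A=1)
                 for b, e in itertools.product((0, 1), repeat=2))
posterior = p_b1_and_a1 / p_a1_total                     # P(B=1 | A=1)
```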

🚫 Rejection Sampling Limitations

Simple but wasteful sampling mechanism

Rejection sampling draws independent samples by running the probabilistic program forward and discards any sample that contradicts the evidence (e.g., where A≠1), using only the surviving samples to estimate query probabilities such as P(B=1|A=1)≈0.44 from 300 samples.
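The generate-then-filter loop can be sketched directly, again with an assumed alarm table (only the priors are given in the summary); note how many samples the evidence check throws away:

```python
import random

random.seed(0)

# Priors from the lecture; the alarm table is an illustrative assumption.
P_B1, P_E1 = 0.05, 0.05
p_a1 = {(1, 1): 0.95, (1, 0): 0.90, (0, 1): 0.80, (0, 0): 0.01}

def forward_sample():
    """Run the probabilistic program: draw B, then E, then A."""
    b = int(random.random() < P_B1)
    e = int(random.random() < P_E1)
    a = int(random.random() < p_a1[(b, e)])
    return b, e, a

kept = b_hits = 0
for _ in range(100_000):
    b, e, a = forward_sample()
    if a != 1:       # evidence check: throw the sample away
        continue
    kept += 1
    b_hits += b

estimate = b_hits / kept   # ≈ P(B=1 | A=1), from the ~9% of samples kept
```

With these numbers about 91% of the work is wasted; with evidence of probability 10^-30 essentially all of it would be.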

Catastrophic inefficiency with rare evidence

When evidence has exponentially small probability (e.g., 10^-30), the algorithm rejects nearly all samples, requiring prohibitive computation time to obtain sufficient valid observations.

Evidence-blind generation

Samples are generated without considering the target evidence, making the algorithm 'blind' to constraints until the rejection check occurs.

🔄 Gibbs Sampling Strategy

MCMC with dependent samples

Gibbs sampling generates a Markov chain where each sample depends on the previous one, starting with an arbitrary assignment that satisfies the evidence and avoiding rejection entirely.

Iterative single-variable updates

The algorithm cycles through the unobserved variables, resampling each one conditioned on the current values of all the others (which, by the network structure, reduces to conditioning on its Markov blanket), so the chain gradually converges to the target conditional distribution.
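A sketch of Gibbs sweeps for the alarm network with evidence A=1, using the lecture's priors and an assumed alarm table; each single-variable conditional is proportional to the product of the factors that mention that variable:

```python
import random

random.seed(0)

P_B1, P_E1 = 0.05, 0.05
# Assumed alarm table (not given in the summary).
p_a1 = {(1, 1): 0.95, (1, 0): 0.90, (0, 1): 0.80, (0, 0): 0.01}

def bernoulli(w1, w0):
    """Draw 1 with probability w1 / (w1 + w0)."""
    return int(random.random() < w1 / (w1 + w0))

def resample_b(e):
    # P(B | E=e, A=1) ∝ P(B) * P(A=1 | B, E=e)
    return bernoulli(P_B1 * p_a1[(1, e)], (1 - P_B1) * p_a1[(0, e)])

def resample_e(b):
    # P(E | B=b, A=1) ∝ P(E) * P(A=1 | B=b, E)
    return bernoulli(P_E1 * p_a1[(b, 1)], (1 - P_E1) * p_a1[(b, 0)])

b, e = 1, 1                 # arbitrary start; A=1 stays clamped throughout
b_hits, n = 0, 50_000
for _ in range(n):          # one sweep = resample each non-evidence variable
    b = resample_b(e)
    e = resample_e(b)
    b_hits += b             # every sweep yields a usable (correlated) sample

estimate = b_hits / n       # ≈ P(B=1 | A=1), with zero rejections
```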

Efficiency trade-off

While samples are correlated—requiring more samples than independent draws for equivalent accuracy—the method avoids exponential rejection rates, making it practical for rare evidence.

☎️ Telephone Game Example

Modeling noisy communication chains

A 3-node chain (A→B→C) represents message passing where each hop preserves the bit with probability 0.8 and flips it with probability 0.2.
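For a chain this small, the posterior can be computed exactly by enumeration. Assuming a uniform prior on A (the summary does not state one), the exact answer is 0.68, in the neighborhood of the lecture's sampled ≈0.65:

```python
import itertools

FLIP = 0.2   # each hop flips the bit with probability 0.2

def p_hop(out_bit, in_bit):
    """P(next node hears out_bit | previous node said in_bit)."""
    return 1 - FLIP if out_bit == in_bit else FLIP

def joint(a, b, c):
    # Assumption: uniform prior P(A=1) = 0.5 (not stated in the summary).
    return 0.5 * p_hop(b, a) * p_hop(c, b)

num = sum(joint(1, b, 1) for b in (0, 1))    # P(A=1, C=1)
den = sum(joint(a, b, 1)                     # P(C=1)
          for a, b in itertools.product((0, 1), repeat=2))
posterior = num / den                        # P(A=1 | C=1)
```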

Inference with rare evidence

Given that the final recipient hears C=1, Gibbs sampling efficiently estimates P(A=1|C=1)≈0.65 without rejecting any samples, whereas rejection sampling struggles when the evidence C=1 is unlikely.
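Assuming a uniform prior on A, a Gibbs sketch for the chain with evidence C=1 alternates between resampling B given (A, C=1) and A given B; the estimate should approach the exact enumeration value of about 0.68, near the lecture's ≈0.65:

```python
import random

random.seed(0)
FLIP = 0.2

def p_hop(out_bit, in_bit):
    return 1 - FLIP if out_bit == in_bit else FLIP

def bernoulli(w1, w0):
    return int(random.random() < w1 / (w1 + w0))

a, b = 1, 1                  # arbitrary start; C=1 stays clamped
a_hits, n = 0, 50_000
for _ in range(n):
    # P(B | A=a, C=1) ∝ P(B | A=a) * P(C=1 | B)
    b = bernoulli(p_hop(1, a) * p_hop(1, 1), p_hop(0, a) * p_hop(1, 0))
    # P(A | B=b) ∝ P(A) * P(B=b | A); the uniform prior on A cancels
    a = bernoulli(p_hop(b, 1), p_hop(b, 0))
    a_hits += a

estimate = a_hits / n        # ≈ P(A=1 | C=1)
```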

Bottom Line

Use Gibbs sampling to iteratively adjust existing samples rather than generating from scratch, trading sample independence for the ability to perform efficient approximate inference even when conditioning on rare evidence.

More from Stanford Online

Stanford CS221 | Autumn 2025 | Lecture 20: Fireside Chat, Conclusion (58:49)
Percy Liang reflects on AI's transformation from academic curiosity to global infrastructure, debunking sci-fi misconceptions about capabilities while arguing that academia's role in long-term research and critical evaluation remains essential as the job market shifts away from traditional entry-level software engineering.

Stanford CS221 | Autumn 2025 | Lecture 19: AI Supply Chains (1:14:36)
This lecture examines AI's economic impact through the lens of supply chains and organizational strategy, demonstrating why understanding compute monopolies, labor market shifts, and corporate decision-making is as critical as tracking algorithmic capabilities.

Stanford CS221 | Autumn 2025 | Lecture 18: AI & Society (1:12:10)
This lecture argues that AI developers bear unique ethical responsibility for societal outcomes, framing AI as a dual-use technology that requires active steering toward beneficial applications while preventing misuse and accidental harms through rigorous auditing and an ecosystem-aware approach.

Stanford CS221 | Autumn 2025 | Lecture 17: Language Models (1:19:46)
This lecture introduces modern language models as industrial-scale systems requiring millions of dollars and trillions of tokens to train, explaining their fundamental operation as auto-regressive next-token predictors that encode language structure through massive statistical modeling.