Stanford CS221 | Autumn 2025 | Lecture 16: Logic II

| Podcasts | March 09, 2026 | 749 views | 1:15:47

TL;DR

This lecture introduces First Order Logic as a powerful extension of propositional logic that uses objects, predicates, functions, and quantifiers to compactly represent complex relationships and generalizations without enumerating every possible instance.

⚠️ Limitations of Propositional Logic

Scalability breakdown with generalizations

Representing "all students know arithmetic" requires manually conjoining a separate symbol for every student (Alice, Bob, etc.), producing formulas that grow with the size of the domain and quickly become impractical for real-world domains.
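The blow-up can be made concrete with a minimal sketch (the student names and symbol-naming scheme are illustrative, not from the lecture): propositional logic needs one fresh symbol per student, while first order logic states the rule once for any domain size.

```python
# Toy domain of students (hypothetical names for illustration).
students = ["alice", "bob", "carol"]

# Propositional logic: one symbol per student, all conjoined.
# The formula grows with the number of students.
propositional = " AND ".join(f"KnowsArithmetic_{s}" for s in students)
print(propositional)

# First order logic: a single quantified rule, independent of domain size.
first_order = "forall x. Student(x) -> KnowsArithmetic(x)"
print(first_order)
```

Adding a thousand more students lengthens the propositional formula by a thousand conjuncts, but leaves the first order rule unchanged.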

Missing structural elements

Propositional logic treats statements like "Alice knows arithmetic" as monolithic symbols, lacking the machinery to represent internal objects, relationships, or variables needed for abstraction.

🏗️ First Order Logic Syntax

Terms denote objects, formulas denote truth

Terms (constants like Alice, variables like x, functions like father(x)) represent objects, while formulas (predicates applied to terms) evaluate to true or false.

Predicates vs. functions

Functions map terms to other terms (father(Alice) returns an object), whereas predicates map terms to boolean values (Student(Alice) returns true/false).
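The function/predicate split can be mimicked in a minimal Python sketch (the toy domain and the dictionary encoding are assumptions for illustration): a function returns another object, while a predicate returns a boolean.

```python
# Hypothetical toy domain for illustration.
people_father = {"alice": "bob"}
students = {"alice"}

def father(x):
    # Function: maps an object to another object (term -> term).
    return people_father[x]

def Student(x):
    # Predicate: maps an object to a truth value (term -> bool).
    return x in students

print(father("alice"))   # an object
print(Student("alice"))  # a truth value
```

Note the return types: `father` yields something you can apply further functions or predicates to, whereas `Student` yields something you can combine with logical connectives.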

Strict type conventions

By convention, lowercase names denote terms representing objects, while capitalized names denote predicates and formulas representing truth values; this typing discipline helps prevent ill-formed expressions like Student(Arithmetic).

Quantifiers and Expressive Power

Universal and existential quantification

Quantifiers (∀ for all, ∃ for exists) allow expressing general rules like "for all x, Student(x) implies Person(x)" without enumerating every individual in the domain.
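Over a finite domain, the two quantifiers reduce to iteration, which a minimal sketch makes explicit (the domain and predicate extensions below are assumed for illustration): ∀ corresponds to `all()` and ∃ to `any()`.

```python
# Hypothetical finite domain and predicate extensions.
domain = {"alice", "bob", "calc101"}
students = {"alice", "bob"}
persons = {"alice", "bob"}

def Student(x): return x in students
def Person(x): return x in persons

# forall x. Student(x) -> Person(x)
# (the implication A -> B is encoded as (not A) or B)
forall_holds = all((not Student(x)) or Person(x) for x in domain)

# exists x. Student(x)
exists_holds = any(Student(x) for x in domain)

print(forall_holds, exists_holds)
```

The non-student object (`calc101`) satisfies the universal rule vacuously, which mirrors how material implication behaves in the logic.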

Compact complex representations

First order logic enables concise encoding of mathematical conjectures and real-world facts (e.g., Goldbach's conjecture) that would otherwise require infinitely many propositional symbols.
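As one illustration, Goldbach's conjecture ("every even number greater than 2 is the sum of two primes") is a single quantified formula, assuming predicates Even, Greater, and Prime; a propositional encoding would need a separate statement for each even number.

```latex
\forall n\, \bigl( \mathrm{Even}(n) \wedge \mathrm{Greater}(n, 2)
  \rightarrow \exists p\, \exists q\,
    \bigl( \mathrm{Prime}(p) \wedge \mathrm{Prime}(q) \wedge n = p + q \bigr) \bigr)
```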

Inference foundations remain consistent

First order logic maintains the same semantic framework as propositional logic—models, interpretation functions, entailment, and satisfiability—while extending representational capacity.
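Because the semantics carry over, entailment can still be checked by enumerating models, at least over a tiny finite domain. The sketch below (a toy two-object domain with one unary predicate, chosen for illustration) checks whether a knowledge base entails a formula by testing every model that satisfies the KB.

```python
from itertools import combinations

# Toy domain; a "model" here is just the extension of one predicate P.
domain = ["a", "b"]
models = [set(s) for r in range(len(domain) + 1)
          for s in combinations(domain, r)]

# Formulas are encoded as callables on the predicate's extension.
KB = lambda P: "a" in P and "b" in P        # P(a) AND P(b)
f = lambda P: any(x in P for x in domain)   # exists x. P(x)

# KB entails f iff every model satisfying KB also satisfies f.
entails = all(f(m) for m in models if KB(m))
print(entails)
```

Real first order domains are generally infinite, so exhaustive model enumeration does not scale; this is only meant to show that the *definition* of entailment is unchanged from propositional logic.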

Bottom Line

First order logic provides the essential machinery—quantifiers, predicates, and a strict separation between terms (objects) and formulas (truth values)—to represent complex world knowledge compactly and perform scalable automated reasoning.
