Stanford CS221 | Autumn 2025 | Lecture 15: Logic I
TL;DR
This lecture introduces logic as a formal language for knowledge representation and reasoning, contrasting it with probabilistic methods and natural language. It establishes the foundational framework of syntax, semantics, and inference rules, then dives into propositional logic's mechanics including formulas, models, and interpretation functions.
🧠 Logic in AI Context 3 insights
Historical dominance before machine learning
Logic dominated AI research from the field's founding, when John McCarthy coined the term "artificial intelligence," until the 1990s, when it declined because it could not handle uncertainty or exploit large datasets.
Irreplaceable expressivity advantage
Unlike search or probabilistic reasoning, logic offers a uniquely compact and expressive knowledge representation that supports powerful symbolic manipulation, much as algebra does.
Natural language ambiguity problems
Natural language proves too slippery for reliable automated reasoning, as demonstrated by logical fallacies where transitivity fails due to linguistic ambiguity.
⚙️ Three Components of Logic 3 insights
Syntax defines valid formulas
Syntax specifies the formal rules for constructing valid expressions or sentences within the logical language, independent of their meaning.
Semantics provides interpretation
Semantics assigns meaning to syntactic expressions, determining truth values and distinguishing cases like Python 2.7 versus Python 3 evaluating 3/2 differently.
Inference rules enable derivation
Inference rules allow generation of new valid formulas from existing ones, creating a mechanism for logical reasoning and proof construction.
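As a concrete illustration, one classic inference rule is modus ponens: from f and f → g, derive g. The sketch below (the tuple encoding and function name are my own, not from the lecture) shows how such a rule can be applied mechanically to syntactic formulas:

```python
# Hypothetical sketch of one inference rule, modus ponens.
# Formulas are nested tuples; ("implies", f, g) encodes f -> g.
def modus_ponens(known, implication):
    """From a known formula f and an implication (f -> g), derive g."""
    op, premise, conclusion = implication
    if op == "implies" and premise == known:
        return conclusion
    return None  # rule does not apply

modus_ponens("rain", ("implies", "rain", "wet"))  # → "wet"
```

The rule operates purely on syntax: it derives "wet" without ever consulting what "rain" or "wet" mean.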
🔣 Propositional Logic Structure 2 insights
Atomic propositions as variables
Propositional logic builds from atomic formulas or symbols like P, Q, rain, or wet that act as variable names without inherent meaning until interpreted.
Recursive formula construction
Complex formulas are built recursively using five connectives: negation, conjunction, disjunction, implication, and biconditional equivalence.
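The recursive construction can be sketched as a small abstract syntax tree; the class names below are my own choice, not notation from the lecture:

```python
from dataclasses import dataclass
from typing import Union

# Hypothetical AST sketch: a formula is either an atomic symbol
# or built recursively from one of the five connectives.
@dataclass
class Atom:
    name: str            # e.g. "rain" or "wet"; no inherent meaning

@dataclass
class Neg:
    arg: "Formula"       # negation: ¬f

@dataclass
class BinOp:
    op: str              # "and", "or", "implies", or "iff"
    left: "Formula"
    right: "Formula"

Formula = Union[Atom, Neg, BinOp]

# "If it rains, the ground is wet", built recursively:
f = BinOp("implies", Atom("rain"), Atom("wet"))
```

Because construction is recursive, any well-formed formula can itself be negated or combined again, which is exactly what makes the language compositional.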
🌍 Semantics and Models 3 insights
Models represent possible worlds
In logic, a model is a complete assignment of truth values to all propositional symbols, representing one possible state of the world; with n symbols there are 2^n such possible worlds.
Interpretation function bridges syntax and semantics
The interpretation function I(f,w) recursively evaluates a formula's parse tree against a specific model to return true or false.
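This recursive evaluation can be sketched in a few lines; the tuple encoding of formulas below is an assumption of mine, not the lecture's notation:

```python
# Sketch of the interpretation function I(f, w): formulas are a bare
# symbol name or nested tuples ("not", f), ("and", f, g), ("or", f, g),
# ("implies", f, g), ("iff", f, g); a model w maps symbols to booleans.
def interpret(f, w):
    if isinstance(f, str):           # atomic symbol: look up in the model
        return w[f]
    op, *args = f
    if op == "not":
        return not interpret(args[0], w)
    a, b = (interpret(g, w) for g in args)   # evaluate subtrees first
    if op == "and":     return a and b
    if op == "or":      return a or b
    if op == "implies": return (not a) or b
    if op == "iff":     return a == b
    raise ValueError(f"unknown connective {op!r}")

w = {"rain": True, "wet": True}
interpret(("implies", "rain", "wet"), w)     # → True
```

The base case reads a symbol's truth value straight from the model; every other case recurses down the parse tree and combines the sub-results with the connective's truth table.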
Formula models define truth sets
M(f) denotes the set of all models where a formula evaluates to true, demonstrating how compact logical expressions can represent vast sets of possible world states.
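A brute-force sketch of M(f): enumerate all 2^n truth assignments and keep those satisfying f. Here, as a simplifying assumption of mine, a formula is represented directly as a predicate over models rather than as a parse tree:

```python
from itertools import product

# Sketch: M(f) = the set of models where formula f evaluates to true.
# A formula is modeled as a predicate over models (dicts of booleans).
def models(f, symbols):
    satisfying = []
    for values in product([True, False], repeat=len(symbols)):
        w = dict(zip(symbols, values))   # one of the 2^n possible worlds
        if f(w):
            satisfying.append(w)
    return satisfying

# f = rain -> wet: true in 3 of the 4 possible worlds.
f = lambda w: (not w["rain"]) or w["wet"]
len(models(f, ["rain", "wet"]))          # → 3
```

The short formula "rain → wet" picks out three of four worlds here; with more symbols, an equally short formula can carve out an exponentially large set of worlds, which is the compactness the lecture emphasizes.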
Bottom Line
Logic provides a formal language for knowledge representation through three core components—syntax for structure, semantics for meaning via models and interpretation functions, and inference rules for reasoning—enabling compact expression of complex world states that natural language cannot reliably capture.