Stanford CS221 | Autumn 2025 | Lecture 15: Logic I
TL;DR
This lecture introduces logic as a formal language for knowledge representation and reasoning, contrasting it with probabilistic methods and natural language. It establishes the foundational framework of syntax, semantics, and inference rules, then dives into propositional logic's mechanics including formulas, models, and interpretation functions.
🧠 Logic in AI Context
Historical dominance before machine learning
Logic dominated AI research from the field's founding, when John McCarthy coined the term "artificial intelligence," until roughly the 1990s; it then declined because it could neither handle uncertainty nor leverage large datasets.
Irreplaceable expressivity advantage
Unlike search or probabilistic reasoning, logic offers a uniquely compact and expressive form of knowledge representation, one that supports powerful symbolic manipulation in the same way algebra does.
Natural language ambiguity problems
Natural language proves too slippery for reliable automated reasoning, as demonstrated by classic fallacies where transitivity appears to fail because a word shifts meaning between premises (for instance, arguments of the form "a penny is better than nothing; nothing is better than world peace; therefore a penny is better than world peace").
⚙️ Three Components of Logic
Syntax defines valid formulas
Syntax specifies the formal rules for constructing valid expressions or sentences within the logical language, independent of their meaning.
Semantics provides interpretation
Semantics assigns meaning to syntactic expressions, determining their truth values. The same syntax can carry different semantics: the expression 3/2 evaluates to 1 in Python 2.7 but to 1.5 in Python 3.
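The division example can be checked directly in Python 3, where `//` reproduces the integer-division semantics that Python 2 gave to `/` on ints (a small illustration, not course code):

```python
# One syntactic expression, two semantics:
# Python 3's / is true division; // mirrors Python 2's integer division on ints.
true_div = 3 / 2    # 1.5 under Python 3 semantics
floor_div = 3 // 2  # 1, what 3/2 returned under Python 2.7 semantics
print(true_div, floor_div)  # 1.5 1
```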
Inference rules enable derivation
Inference rules allow generation of new valid formulas from existing ones, creating a mechanism for logical reasoning and proof construction.
🔣 Propositional Logic Structure
Atomic propositions as variables
Propositional logic builds from atomic formulas or symbols like P, Q, rain, or wet that act as variable names without inherent meaning until interpreted.
Recursive formula construction
Complex formulas are built recursively using five connectives: negation, conjunction, disjunction, implication, and biconditional equivalence.
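One simple way to mirror this recursive construction in code is to encode formulas as nested tuples whose first element names the connective; the encoding below is an illustrative sketch, not the course's implementation:

```python
# Atomic propositions are bare strings; compound formulas are tagged tuples.
rain, wet = "rain", "wet"

f = ("implies", rain, wet)        # rain → wet
g = ("and", f, ("not", wet))      # (rain → wet) ∧ ¬wet
h = ("iff", ("or", rain, wet), g) # (rain ∨ wet) ↔ g
print(g)  # ('and', ('implies', 'rain', 'wet'), ('not', 'wet'))
```

Because each connective simply wraps smaller formulas, arbitrarily deep formulas come for free from the recursion.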
🌍 Semantics and Models
Models represent possible worlds
In logic, a model is a complete assignment of truth values to all propositional symbols, representing one possible state of the world; with n symbols there are 2^n such possible worlds.
Interpretation function bridges syntax and semantics
The interpretation function I(f,w) recursively evaluates a formula's parse tree against a specific model to return true or false.
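The recursive evaluation of I(f, w) can be sketched as a function that walks a formula (here encoded as nested tuples, with a model as a dict from symbol to truth value; the encoding is an assumption for illustration):

```python
def interpret(f, w):
    """I(f, w): recursively evaluate formula f against model w (dict symbol -> bool)."""
    if isinstance(f, str):          # base case: atomic proposition
        return w[f]
    op, *args = f
    if op == "not":
        return not interpret(args[0], w)
    if op == "and":
        return interpret(args[0], w) and interpret(args[1], w)
    if op == "or":
        return interpret(args[0], w) or interpret(args[1], w)
    if op == "implies":             # p → q  ≡  ¬p ∨ q
        return (not interpret(args[0], w)) or interpret(args[1], w)
    if op == "iff":
        return interpret(args[0], w) == interpret(args[1], w)
    raise ValueError(f"unknown connective: {op}")

# rain → wet, evaluated in two different models:
print(interpret(("implies", "rain", "wet"), {"rain": True, "wet": True}))   # True
print(interpret(("implies", "rain", "wet"), {"rain": True, "wet": False}))  # False
```

Each call descends one level of the parse tree, so evaluation is linear in the size of the formula.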
Formula models define truth sets
M(f) denotes the set of all models where a formula evaluates to true, demonstrating how compact logical expressions can represent vast sets of possible world states.
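M(f) can be computed by brute force for small symbol sets: enumerate all 2^n assignments and keep those satisfying f. The sketch below (same illustrative tuple encoding as above, with its own evaluator so it runs standalone) shows how a three-character formula picks out 3 of the 4 possible worlds:

```python
from itertools import product

def interpret(f, w):
    # Minimal evaluator over nested-tuple formulas (illustrative, not course code).
    if isinstance(f, str):
        return w[f]
    op, *args = f
    if op == "not":
        return not interpret(args[0], w)
    if op == "and":
        return interpret(args[0], w) and interpret(args[1], w)
    if op == "or":
        return interpret(args[0], w) or interpret(args[1], w)
    if op == "implies":
        return (not interpret(args[0], w)) or interpret(args[1], w)
    if op == "iff":
        return interpret(args[0], w) == interpret(args[1], w)
    raise ValueError(f"unknown connective: {op}")

def models(f, symbols):
    """M(f): all complete assignments over `symbols` in which f is true."""
    return [dict(zip(symbols, vals))
            for vals in product([False, True], repeat=len(symbols))
            if interpret(f, dict(zip(symbols, vals)))]

# "rain → wet" rules out exactly one world: rain without wet.
sat = models(("implies", "rain", "wet"), ["rain", "wet"])
print(len(sat))  # 3
```

The exponential blow-up in worlds is exactly why the compactness of formulas matters: a short formula can carve out a huge set of models without listing them.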
Bottom Line
Logic provides a formal language for knowledge representation through three core components—syntax for structure, semantics for meaning via models and interpretation functions, and inference rules for reasoning—enabling compact expression of complex world states that natural language cannot reliably capture.