Stanford CS221 | Autumn 2025 | Lecture 15: Logic I

Podcasts | March 09, 2026 | 1:13:26

TL;DR

This lecture introduces logic as a formal language for knowledge representation and reasoning, contrasting it with probabilistic methods and natural language. It establishes the foundational framework of syntax, semantics, and inference rules, then dives into propositional logic's mechanics including formulas, models, and interpretation functions.

🧠 Logic in AI Context

Historical dominance before machine learning

Logic dominated AI research from the field's founding, when John McCarthy coined the term "AI," until the 1990s, when it declined because it could neither handle uncertainty nor leverage large datasets.

Irreplaceable expressivity advantage

Unlike search or probabilistic reasoning, logic offers a uniquely compact and expressive form of knowledge representation, enabling powerful symbolic manipulation much as algebra does.

Natural language ambiguity problems

Natural language proves too slippery for reliable automated reasoning, as demonstrated by fallacies in which transitivity fails due to linguistic ambiguity (a classic example: "nothing is better than world peace; a penny is better than nothing; therefore a penny is better than world peace").

⚙️ Three Components of Logic

Syntax defines valid formulas

Syntax specifies the formal rules for constructing valid expressions or sentences within the logical language, independent of their meaning.

Semantics provides interpretation

Semantics assigns meaning to syntactic expressions, determining their truth values; for example, the expression 3/2 has identical syntax in Python 2.7 and Python 3 but different semantics (integer division yields 1, true division yields 1.5).

Inference rules enable derivation

Inference rules allow generation of new valid formulas from existing ones, creating a mechanism for logical reasoning and proof construction.

🔣 Propositional Logic Structure

Atomic propositions as variables

Propositional logic builds from atomic formulas or symbols like P, Q, rain, or wet that act as variable names without inherent meaning until interpreted.

Recursive formula construction

Complex formulas are built recursively using five connectives: negation, conjunction, disjunction, implication, and biconditional equivalence.

🌍 Semantics and Models

Models represent possible worlds

In logic, a model is a complete assignment of truth values to all propositional symbols, representing one possible state of the world among exponentially many possibilities (2^n models for n symbols).

Interpretation function bridges syntax and semantics

The interpretation function I(f,w) recursively evaluates a formula's parse tree against a specific model to return true or false.
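A minimal sketch of such an interpretation function (the tuple encoding of formulas is my own, not the lecture's notation): formulas are nested tuples tagged by connective, a model w is a dict mapping each symbol to True or False, and I recursively walks the parse tree:

```python
def I(f, w):
    """Recursively evaluate formula f against model w (a dict
    mapping each propositional symbol to True/False)."""
    if isinstance(f, str):            # atomic symbol: look it up in the model
        return w[f]
    op = f[0]
    if op == 'not':
        return not I(f[1], w)
    if op == 'and':
        return I(f[1], w) and I(f[2], w)
    if op == 'or':
        return I(f[1], w) or I(f[2], w)
    if op == 'implies':               # f -> g is equivalent to (not f) or g
        return (not I(f[1], w)) or I(f[2], w)
    if op == 'iff':                   # biconditional: same truth value
        return I(f[1], w) == I(f[2], w)
    raise ValueError(f'unknown connective: {op}')

f = ('implies', 'rain', 'wet')
print(I(f, {'rain': True, 'wet': True}))   # True
print(I(f, {'rain': True, 'wet': False}))  # False
```

Each connective maps directly onto one recursive case, mirroring the formula's recursive syntax.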

Formula models define truth sets

M(f) denotes the set of all models where a formula evaluates to true, demonstrating how compact logical expressions can represent vast sets of possible world states.
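One way to make M(f) concrete (a sketch; the helper below is my own, and the formula is passed as a Python predicate over models rather than a parsed expression): enumerate all 2^n assignments over the symbols and keep those where the formula holds:

```python
from itertools import product

def M(holds, symbols):
    """Return the set of models (each a frozenset of the symbols
    assigned True) in which the formula holds. `holds` is a
    predicate taking a model dict and returning True/False."""
    models = set()
    for values in product([False, True], repeat=len(symbols)):
        w = dict(zip(symbols, values))
        if holds(w):
            models.add(frozenset(s for s in symbols if w[s]))
    return models

# Formula rain -> wet, written directly as (not rain) or wet.
result = M(lambda w: (not w['rain']) or w['wet'], ['rain', 'wet'])
print(result)
# 3 of the 4 possible worlds satisfy it; only {rain=True, wet=False}
# is excluded.
```

A single short formula thus carves out a large set of worlds, which is the sense in which compact logical expressions represent vast sets of world states.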

Bottom Line

Logic provides a formal language for knowledge representation through three core components—syntax for structure, semantics for meaning via models and interpretation functions, and inference rules for reasoning—enabling compact expression of complex world states that natural language cannot reliably capture.
