Stanford AA228 Decision Making Under Uncertainty | Autumn 2025 | Offline Belief State Planning
TL;DR
This lecture covers approximate offline methods for solving POMDPs when exact solutions are computationally intractable, focusing on QMDP: a technique that plans as if the problem were a fully observable MDP, stores the resulting Q-values as one alpha vector per action, and executes by weighting those values with the current belief state.
🧮 The Intractability of Exact POMDP Solutions
Exact value iteration explodes exponentially
For a small POMDP with just two actions and two observations, exact methods must consider up to 10^338 conditional plans over a 10-step horizon, putting optimal solutions out of reach even for modest problem sizes; the counting argument is sketched below.
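A standard counting argument (the general formula, not a number specific to this lecture) shows why: an h-step conditional plan is a tree with (|O|^h − 1)/(|O| − 1) nodes, each labeled by one of |A| actions, so the number of distinct plans is

```latex
% Number of distinct h-step conditional plans for action set A and
% observation set O: every node of the depth-h plan tree carries an action.
|\mathcal{A}|^{\,(|\mathcal{O}|^h - 1)/(|\mathcal{O}| - 1)}
```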
Belief space MDPs face continuous state spaces
Formulating a POMDP as a belief-state MDP yields a continuous state space that standard exact solution techniques cannot handle; the belief update that drives this formulation is sketched below.
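In the belief-state MDP, the state is the belief b itself, updated by a discrete Bayes filter after each action a and observation o (standard notation, assumed rather than taken from this summary):

```latex
% Discrete Bayesian belief update: T(s'|s,a) is the transition model,
% O(o|a,s') the observation model; the result is normalized over s'.
b'(s') \;\propto\; O(o \mid a, s') \sum_{s} T(s' \mid s, a)\, b(s)
```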
🎯 The QMDP Approximation Method
Solve as MDP, execute with belief weighting
QMDP ignores partial observability during planning, computing standard MDP Q-values offline; at execution time, it selects the action with the highest belief-weighted average of those values, as shown in the sketch below.
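A minimal sketch of both phases, assuming a tabular problem with a hypothetical transition tensor T[a, s, s'] and reward matrix R[s, a] (these names are illustrative, not from the lecture):

```python
import numpy as np

def qmdp_q_values(T, R, gamma=0.95, iters=500):
    """Offline phase: value-iterate the fully observable MDP.

    T: (n_actions, n_states, n_states) tensor, T[a, s, s'] = Pr(s' | s, a)
    R: (n_states, n_actions) reward matrix
    Returns Q of shape (n_states, n_actions); column a is action a's alpha vector.
    """
    n_actions, n_states, _ = T.shape
    Q = np.zeros((n_states, n_actions))
    for _ in range(iters):
        V = Q.max(axis=1)                        # greedy state values
        for a in range(n_actions):
            Q[:, a] = R[:, a] + gamma * T[a] @ V
    return Q

def qmdp_action(Q, belief):
    """Online phase: argmax_a sum_s b(s) Q(s, a)."""
    return int(np.argmax(belief @ Q))

# Toy usage on a random 3-state, 2-action problem:
rng = np.random.default_rng(0)
T = rng.dirichlet(np.ones(3), size=(2, 3))       # valid transition rows
R = rng.normal(size=(3, 2))
Q = qmdp_q_values(T, R)
print(qmdp_action(Q, np.array([0.6, 0.3, 0.1])))
```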
Deployed in ACAS X collision avoidance
This computationally efficient approach powers the ACAS X aircraft collision avoidance system, which selects avoidance advisories by interpolating pre-computed state values over the current belief distribution.
One alpha vector per action representation
QMDP can be viewed as maintaining a single alpha vector per action, where each entry stores the expected utility of taking that action from a specific state and then acting optimally under full observability; the iteration is given below.
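Written out, the iteration below converges to those alpha vectors, where α_a^(k)(s) denotes action a's vector at iteration k (the standard QMDP recurrence, stated here for reference):

```latex
% QMDP update: one alpha vector per action, one entry per state.
\alpha_a^{(k+1)}(s) \;=\; R(s, a) \;+\; \gamma \sum_{s'} T(s' \mid s, a)\, \max_{a'} \alpha_{a'}^{(k)}(s')
```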
📐 Alpha Vector Fundamentals
Alpha vectors store state-conditional utilities
Each alpha vector contains one entry per state representing the expected future utility if the agent were actually in that particular state and followed the associated policy.
Value function as belief-weighted sum
The estimated value of a belief state is computed as the dot product between the belief probability distribution and an alpha vector.
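With a set Λ of alpha vectors, the value is the maximum such dot product; for a hypothetical two-state belief b = (0.7, 0.3) and α = (10, 2), this gives 0.7·10 + 0.3·2 = 7.6:

```latex
% Value of belief b under alpha-vector set Lambda:
U^{\Lambda}(b) \;=\; \max_{\alpha \in \Lambda} \alpha^{\top} b
            \;=\; \max_{\alpha \in \Lambda} \sum_{s} \alpha(s)\, b(s)
```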
Bottom Line
For real-world POMDPs where exact solutions are intractable, compute a QMDP policy offline by solving the fully observable MDP and storing each action's Q-values as an alpha vector; then, at execution time, select the action whose alpha vector has the largest dot product with the current belief.