Stanford AA228V | Validation of Safety-Critical Systems | Explainability
TL;DR
This lecture reviews Project 3 results on reachability analysis, then introduces explainability methods for safety-critical AI systems. The focus is on attributing failures to specific time steps using Shapley values from cooperative game theory, which succeed where simple ablation studies fail because catastrophic outcomes often arise from correlated noise patterns.
🏆 Project 3 Results & Verification Techniques
AI-squared dominance on large systems
Top leaderboard performers achieved tightly clustered scores (0.70-0.72) using AI-squared verification techniques for large-scale systems, significantly outperforming other approaches.
Advanced geometric methods for small systems
Winning solutions employed zonotopes and PCA-aligned rectangles rather than simple axis-aligned box approximations, capturing more accurate reachable sets.
Second-order Taylor expansions improve accuracy
For medium-sized systems, second-order Taylor expansions that incorporate Hessian information provided measurable accuracy gains over first-order linearization.
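The gain from the second-order term can be seen in a minimal sketch. The 1-D function, derivatives, and expansion point below are illustrative assumptions, not taken from the lecture:

```python
import math

# Hypothetical 1-D step map f; function and expansion point are illustrative.
def f(x):
    return math.sin(x) + 0.5 * x * x

def grad_f(x):   # first derivative
    return math.cos(x) + x

def hess_f(x):   # second derivative (the "Hessian" in 1-D)
    return -math.sin(x) + 1.0

x0, dx = 1.0, 0.3
first_order = f(x0) + grad_f(x0) * dx
second_order = first_order + 0.5 * hess_f(x0) * dx * dx

err1 = abs(f(x0 + dx) - first_order)   # linearization error
err2 = abs(f(x0 + dx) - second_order)  # second-order error
assert err2 < err1  # the Hessian term tightens the local approximation
```

The same idea carries over to multivariate dynamics, where the Hessian correction tightens the propagated reachable set around the linearized trajectory.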
⚠️ The Safety-Critical Failure Scenario
Post-incident stakeholder pressure
Chief engineers at companies like Waymo or aviation firms face intense scrutiny following rare catastrophic failures after thousands of successful operating hours, requiring immediate explanations to CEOs, investors, and regulators.
Three critical post-failure questions
Engineers must definitively answer why the specific failure occurred, what system or dataset modifications will prevent recurrence, and how to formally guarantee to stakeholders that the issue is resolved.
⏱️ Temporal Root Cause Analysis
Limitations of leave-one-out analysis
Simple ablation studies that zero out individual noise variables at specific time steps often fail to identify failure causes because catastrophic outcomes frequently stem from correlated patterns across multiple consecutive steps.
Group-based noise attribution required
Analyzing groups of time steps rather than isolated events is necessary to detect redundancy and synergy effects in noise sequences that drive systems into failure regimes.
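A toy example makes the failure of leave-one-out concrete. The noise sequence and failure rule below are hypothetical, chosen so that two redundant disturbances mask each other under single-step ablation:

```python
# Hypothetical failure rule: the trajectory fails if any single
# disturbance exceeds 1.0. Redundant spikes sit at steps 1 and 3.
noise = [0.2, 1.5, 0.1, 1.5, 0.3]

def fails(w):
    return max(w) > 1.0

assert fails(noise)

# Leave-one-out: zero each step individually and re-check the outcome.
loo = []
for i in range(len(noise)):
    ablated = list(noise)
    ablated[i] = 0.0
    # 1 if zeroing step i alone avoids the failure, else 0
    loo.append(int(fails(noise)) - int(fails(ablated)))

# Because the spikes are redundant, no single ablation flips the outcome:
assert loo == [0, 0, 0, 0, 0]

# Zeroing the pair jointly does flip it — the cause is the group.
joint = list(noise)
joint[1] = joint[3] = 0.0
assert not fails(joint)
```

Leave-one-out assigns zero blame to every step here, even though steps 1 and 3 are jointly responsible; group-based attribution is needed to surface them.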
🎲 Shapley Values for Rigorous Attribution
Game theory foundations for ML explainability
Shapley values from 1950s cooperative game theory provide a mathematically rigorous framework to attribute system failures to specific input features by averaging performance across all possible subsets of variables.
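The standard Shapley formula from cooperative game theory makes the "average over all subsets" precise. Here $N$ is the set of $n$ players (noise variables), and $v(S)$ is a value function one could take to be the failure metric when only the variables in $S$ are active:

```latex
% Shapley value of player (noise variable) i in game v with n players:
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|!\,(n - |S| - 1)!}{n!}
  \bigl[\, v(S \cup \{i\}) - v(S) \,\bigr]
```

Each term is variable $i$'s marginal contribution to a coalition $S$, weighted by how often that coalition precedes $i$ in a uniformly random ordering of the players.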
Handling redundancy and synergy
Unlike simple ablation, Shapley values correctly account for scenarios where multiple noise variables are redundant or exhibit synergy, providing precise numerical attribution for each variable's contribution to failure.
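An exact computation on a tiny hypothetical game shows redundancy being handled correctly. Two noise steps carry redundant spikes, and the Shapley values split the blame between them while a benign step gets none:

```python
from itertools import combinations
from math import factorial

# Hypothetical value function v(S): failure indicator when only the
# noise steps in S are active. Steps 0 and 1 are redundant spikes.
noise = [1.5, 1.5, 0.2]
n = len(noise)

def v(S):
    return 1.0 if any(noise[i] > 1.0 for i in S) else 0.0

def shapley(i):
    others = [j for j in range(n) if j != i]
    total = 0.0
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            # Standard Shapley weight |S|! (n - |S| - 1)! / n!
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (v(set(S) | {i}) - v(set(S)))
    return total

values = [shapley(i) for i in range(n)]
# The redundant spikes split the credit; the benign step earns none:
# values ≈ [0.5, 0.5, 0.0]
```

Note that leave-one-out would assign zero to all three steps here, since removing either spike alone leaves the failure intact; the Shapley averaging over subsets is what recovers the 0.5/0.5 split.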
Computational challenges in long trajectories
Applying Shapley values to safety-critical trajectories with 40 or more time steps is computationally demanding: exact computation requires on the order of 2^T subset evaluations, a combinatorial explosion that makes enumeration infeasible.
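A common workaround (a general Monte Carlo technique, not something specific to this lecture) is to sample random player orderings and average each step's marginal contribution. The failure metric and spike locations below are hypothetical:

```python
import random

T = 40               # time steps: 2**40 subsets rules out exact computation
spikes = {7, 8}      # hypothetical correlated cause at steps 7 and 8

def v(S):
    # Failure metric: 1.0 once both correlated steps are active together.
    return 1.0 if spikes <= S else 0.0

def sampled_shapley(num_perms=2000, seed=0):
    rng = random.Random(seed)
    phi = [0.0] * T
    for _ in range(num_perms):
        perm = rng.sample(range(T), T)   # one random ordering of the steps
        S = set()
        for i in perm:
            before = v(S)
            S.add(i)
            phi[i] += v(S) - before      # marginal contribution of step i
    return [p / num_perms for p in phi]

phi = sampled_shapley()
# Steps 7 and 8 each earn about 0.5; every other step earns exactly 0.
```

The estimate converges at a rate independent of 2^T, since each sampled permutation costs only T evaluations of the value function.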
Bottom Line
Implement Shapley value analysis to rigorously attribute failures to specific correlated noise patterns across time steps, enabling targeted system modifications and verifiable guarantees to stakeholders.