Stanford Robotics Seminar ENGR319 | Winter 2026 | Generative Control, Action Chunking, Moravec’s Paradox
TL;DR
This seminar reframes Moravec's Paradox through control theory, showing why robot learning suffers from exponentially compounding errors that symbolic tasks avoid, and identifies action chunking and generative control policies as the essential algorithmic breakthroughs that enabled the 2023 inflection point in robotic manipulation.
🤖 The Algorithmic Moravec Paradox
Pragmatic vs. algorithmic barriers
While data scarcity explains part of Moravec's Paradox, fundamental algorithmic limitations prevent learning from demonstration even with sufficient data in continuous control settings.
The 2023 inflection point
In 2023, behavior cloning achieved surprisingly capable performance on manipulation tasks such as shirt-folding, triggering industrial interest in scaling these techniques toward more ambitious applications.
Algorithmic prerequisites for scaling
Without specific algorithmic interventions, collecting more data fails to improve performance due to inherent instability in standard behavior cloning approaches.
📉 Fundamental Challenges in Continuous Control
Exponential error accumulation
Continuous control systems suffer errors that compound exponentially over the task horizon, unlike discrete symbolic tasks, where errors accumulate only linearly.
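A back-of-the-envelope contrast (a sketch under simplifying assumptions, not the speaker's derivation; \(\varepsilon\) is the per-step imitation error): in a discrete task, a union bound gives

\[ \Pr[\text{any mistake in } T \text{ steps}] \le T\varepsilon, \]

so failure grows at most linearly with horizon. In continuous control, the state error feeds back through the dynamics; if \(L\) bounds the closed-loop sensitivity, then

\[ \|e_{t+1}\| \le L\,\|e_t\| + \varepsilon \quad\Longrightarrow\quad \|e_T\| \le \varepsilon\,\frac{L^T - 1}{L - 1}, \]

which is exponential in \(T\) whenever \(L > 1\).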
Inevitable closed-loop instability
Even when the expert and the dynamics are perfectly stable, any smooth Markovian policy learned by standard methods necessarily induces instability in directions orthogonal to the training data.
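A linear-systems caricature of where the orthogonal instability can come from (a hedged sketch, not the speaker's construction): take dynamics \(x_{t+1} = A x_t + B u_t\) with a stabilizing expert \(u_t = K^\star x_t\), and suppose every demonstrated state lies in a proper subspace \(V \subset \mathbb{R}^n\). A squared-loss fit of \(\hat{K}\) is pinned down only on \(V\); its action on \(V^\perp\) is set by the learner's inductive bias, not by data. If the learned closed loop \(A + B\hat{K}\) has an eigenvalue \(|\lambda| > 1\) along \(V^\perp\), any perturbation off the demonstration subspace grows like \(|\lambda|^t\), even though the expert's closed loop \(A + BK^\star\) and all training rollouts are perfectly stable.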
Distribution mismatch problem
Standard squared-loss supervised learning fits the training distribution extremely well but exerts no control over the rollout distribution, so errors compound once the policy is executed in closed loop.
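A runnable toy of exactly this gap (NumPy only; the construction is illustrative, not the lecture's): the least-squares fit below is essentially exact on the training distribution, yet the closed-loop rollout leaves that distribution and blows up.

```python
# Toy distribution-mismatch demo (illustrative setup): behavior cloning on a
# 2-D linear system where all demonstrations lie on the x-axis. Training
# error is ~0, but rollouts drift off that slice and diverge.
import numpy as np

rng = np.random.default_rng(0)
A = 1.3 * np.eye(2)            # open loop unstable in every direction
B = np.eye(2)
K_star = -0.8 * np.eye(2)      # expert: A + B @ K_star = 0.5 I (stable)

# Demonstrations confined to a 1-D slice of the 2-D state space.
X = np.zeros((500, 2))
X[:, 0] = rng.normal(size=500)
U = X @ K_star.T               # expert actions on training states

# Squared-loss fit: minimum-norm least squares, perfect on-distribution.
W, *_ = np.linalg.lstsq(X, U, rcond=None)
K_hat = W.T
print("max training residual:", np.abs(X @ W - U).max())   # ~1e-16

# Closed-loop rollout from a state a hair off the training slice.
x = np.array([1.0, 1e-3])
for _ in range(30):
    x = A @ x + B @ (K_hat @ x)
print("state after 30 steps:", x)   # 2nd coord ~ 1e-3 * 1.3**30 ≈ 2.6
```

In this toy the unseen direction happens to be open-loop unstable; the lecture's stronger claim is that smooth Markovian policies hit this failure mode even when the dynamics themselves are benign.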
🔧 Breakthrough Algorithmic Solutions
Action chunking removes Markov constraints
Predicting sequences of future actions rather than single actions removes the Markovian restriction and correlates decisions across time, improving closed-loop stability.
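A minimal sketch of the mechanism (hypothetical interfaces, NumPy only): the policy emits a chunk of H actions and the controller replans only every H steps, so decisions within a chunk are correlated by construction.

```python
# Action-chunking sketch (hypothetical interfaces): the policy maps one
# observation to a chunk of H future actions; the controller executes the
# chunk open-loop and replans every H steps instead of every step.
import numpy as np

H = 8  # chunk length (hyperparameter)

def chunk_policy(obs: np.ndarray) -> np.ndarray:
    """Stand-in for a learned model that predicts H actions at once."""
    return np.tile(-0.1 * obs, (H, 1))   # placeholder: repeat a feedback action

def rollout(env_step, obs, T=64):
    for t in range(T):
        if t % H == 0:                   # one policy query per chunk
            chunk = chunk_policy(obs)
        obs = env_step(obs, chunk[t % H])
    return obs

# Toy linear environment for the sketch.
final = rollout(lambda x, u: 0.9 * x + u, np.ones(2))
print("final state:", final)             # chunked feedback still converges
```

Deployed systems often soften the open-loop execution with temporal ensembling: successive chunks overlap, and the actions they propose for the same timestep are averaged.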
Generative control captures multi-modality
Using generative models to predict action distributions captures bifurcations and multiple behavioral modes that deterministic policies cannot represent.
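A toy illustration of the failure mode (illustrative data, NumPy only): at a fork the expert goes left or right with equal probability; the squared-loss optimum is the mean action, which no expert ever took.

```python
# Multi-modality demo: expert actions are bimodal (around -1 and +1).
# A deterministic squared-loss policy predicts the mean (~0); a generative
# policy that samples from the fitted distribution commits to one mode.
import numpy as np

rng = np.random.default_rng(0)
actions = rng.choice([-1.0, 1.0], size=1000) + 0.05 * rng.normal(size=1000)

# Deterministic policy: the squared-loss-optimal predictor is the mean.
print("deterministic action:", actions.mean())   # ~0 -> drives into the obstacle

# Generative policy: a crude 2-component mixture split by sign (a stand-in
# for a learned diffusion or mixture-density model); sample, don't average.
left, right = actions[actions < 0], actions[actions >= 0]
mode = left if rng.random() < len(left) / len(actions) else right
print("sampled action:", rng.choice(mode))       # ~ -1 or +1 -> picks a side
```

In practice the generative policy is a conditional model (diffusion, flow, or mixture-density head) trained on state-action pairs; the point is that sampling, not averaging, preserves the modes.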
Reparameterizing closed-loop dynamics
These interventions effectively reparameterize the interaction between robot and learner, shifting the problem into a regime where the 'bitter lesson' of data scaling becomes effective.
Bottom Line
To overcome Moravec's Paradox in robotics, practitioners must adopt action chunking and generative control policies: standard behavior cloning induces exponentially compounding errors regardless of dataset size.