Stanford Robotics Seminar ENGR319 | Winter 2026 | Robot Motion Learning w/Physics-Based PDE Priors

| Podcasts | January 16, 2026 | 4.29K views | 53:02

TL;DR

This seminar introduces a robot motion planning framework that embeds physics-based PDE priors (specifically the Eikonal equation) into neural networks, achieving real-time inference in high-dimensional spaces while eliminating the need for weeks of expert data gathering required by conventional data-driven methods.

⚠️ The Three-Way Trade-off in Robot Planning

Optimization methods trap themselves in local minima

While optimization-based planners are efficient to train and fast at inference, they adapt poorly to complex terrain and high-DOF constraints because they easily get stuck in local minima.

Classical methods suffer from slow computation

Sampling-based approaches like RRT and search-based methods like A* require no training but have prohibitively slow inference times that prevent real-time application in high-dimensional spaces.

Data-driven approaches demand massive training costs

Imitation and reinforcement learning models adapt well to complexity and infer quickly, but require weeks of expert demonstration data and expensive GPU clusters to train, making transfer to new domains impractical.

📐 Physics-Based PDE Priors

Eikonal PDE provides mathematical structure

The method uses the Eikonal PDE—which governs optimal wavefront propagation—as a prior rather than a physics simulator, where the solution represents a value function (travel time) whose gradient indicates optimal motion direction.
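In its standard form, the Eikonal equation ties the gradient of the travel-time field to a local speed field; the notation below (speed field S, travel time T) is the conventional one, not notation taken verbatim from the talk:

```latex
% Travel time T to goal x_g; S(x) is the local speed field
\|\nabla_x T(x_g, x)\|\, S(x) = 1, \qquad T(x_g, x_g) = 0
% The optimal motion direction follows the negative gradient of T
\dot{x}(t) \propto -\nabla_x T(x_g, x(t))
```

Solving for T over the configuration space thus yields both a value function (time-to-go) and, via its gradient, a feedback motion policy.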

Gradient matching eliminates need for expert data

The neural network is trained via gradient matching: the norm of the network's gradient is matched to the reciprocal of a speed field derived from obstacle clearance (the right-hand side of the Eikonal equation), so training needs only randomly sampled configurations rather than expert trajectories.
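A minimal sketch of such an Eikonal-residual loss, with a toy 2D world and a stand-in value function in place of the real network (the obstacle, clearance-based speed field, and finite-difference gradients here are illustrative assumptions, not the talk's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2D world: one circular obstacle; speed shrinks with clearance.
OBSTACLE_C, OBSTACLE_R = np.array([0.5, 0.5]), 0.2

def speed(x):
    """Speed field S(x): near zero at the obstacle, capped at 1 in free space."""
    clearance = np.linalg.norm(x - OBSTACLE_C, axis=-1) - OBSTACLE_R
    return np.clip(clearance, 1e-3, 1.0)

def travel_time(x, goal):
    """Stand-in for the neural value function T(goal, x) (here: Euclidean)."""
    return np.linalg.norm(x - goal, axis=-1)

def grad_travel_time(x, goal, h=1e-5):
    """Finite-difference gradient of T w.r.t. x (a real model would use autodiff)."""
    g = np.zeros_like(x)
    for i in range(x.shape[-1]):
        dx = np.zeros(x.shape[-1])
        dx[i] = h
        g[..., i] = (travel_time(x + dx, goal) - travel_time(x - dx, goal)) / (2 * h)
    return g

# Eikonal residual on random samples -- no expert trajectories needed:
# the PDE says ||grad T|| * S(x) == 1 everywhere.
goal = np.array([0.9, 0.9])
x = rng.uniform(0.0, 1.0, size=(1024, 2))
grad_norm = np.linalg.norm(grad_travel_time(x, goal), axis=-1)
loss = np.mean((grad_norm * speed(x) - 1.0) ** 2)
print(round(float(loss), 4))
```

Because the residual is evaluated at arbitrary sampled points, data collection reduces to drawing random configurations and querying obstacle distances.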

Surpassing numerical solver limitations

Traditional numerical solvers like the Fast Marching Method scale only to 3-4 dimensions; the neural approach aims to solve the same PDE for high-DOF robots (12-15 dimensions) where numerical methods fail.

🧠 Technical Solutions for High-Dimensional Scaling

Metric learning handles multimodal solutions

Because the planning problem admits multiple locally optimal paths (e.g., going left or right around an obstacle), the network uses a specialized structure with latent-space encoding and max pooling to enforce geodesic-distance properties such as symmetry and the triangle inequality.
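One way such properties can be built in, sketched here with a toy one-layer encoder (the encoder, weights, and pooling head are illustrative assumptions, not the talk's architecture): pooling with elementwise max/min over the two embeddings is invariant to swapping the arguments, which yields symmetry by construction, and taking an L2 norm of the pooled difference gives the triangle inequality for free.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 16))  # hypothetical one-layer encoder weights

def encode(x):
    """Map a configuration to a latent embedding."""
    return np.tanh(x @ W)

def geodesic_like(a, b):
    """Symmetric distance head built from max/min pooling over the pair.

    Pooling is invariant to swapping the arguments, so
    geodesic_like(a, b) == geodesic_like(b, a) by construction, the
    pooled difference vanishes when a == b, and the L2 norm of a
    difference of embeddings obeys the triangle inequality.
    """
    fa, fb = encode(a), encode(b)
    diff = np.maximum(fa, fb) - np.minimum(fa, fb)  # elementwise |fa - fb|
    return float(np.linalg.norm(diff))

a, b = rng.uniform(size=2), rng.uniform(size=2)
print(geodesic_like(a, b), geodesic_like(b, a))
```

Encoding the metric axioms into the architecture, rather than hoping training discovers them, keeps the learned value function consistent even where several path homotopy classes compete.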

Temporal difference learning controls gradients

To prevent error accumulation between consecutive trajectory points, the model applies Temporal Difference (TD) learning to enforce Bellman's principle of optimality, regulating gradients similarly to Q-learning.
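The TD idea can be sketched in tabular form on a 1D corridor; the corridor, uniform speed, and learning rate are illustrative assumptions, not the talk's setup. Each update pulls the time-to-go estimate toward a Bellman target: one step's travel cost plus the estimate at the successor state, exactly as in Q-learning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tabular TD sketch on a 1D corridor: cells 0..N, goal at cell N.
N = 10
SPEED = 1.0                 # uniform speed -> true time-to-go from cell i is N - i
T = np.zeros(N + 1)         # estimated travel time to the goal
ALPHA = 0.5                 # learning rate

for _ in range(500):
    i = rng.integers(0, N)  # random non-goal state
    j = i + 1               # greedy one-step move toward the goal
    # Bellman / TD target: step cost plus the bootstrapped estimate at j
    target = 1.0 / SPEED + T[j]
    T[i] += ALPHA * (target - T[i])
T[N] = 0.0                  # boundary condition at the goal

print(np.round(T, 2))
```

Bootstrapping each estimate off its neighbor's estimate keeps consecutive values consistent, which is what prevents gradient errors from compounding along a trajectory.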

Real-World Efficiency Gains

Sub-100-millisecond inference in complex environments

The method achieves a planning time of approximately 0.07 seconds for 7-DOF robots and scales to 15-DOF systems navigating narrow passages, matching or exceeding the speed of Nvidia's MPiNets while generalizing across 300 unseen environments.

Training in minutes versus weeks

Compared to baseline methods requiring several weeks of data gathering and one week of training on eight Tesla GPUs, this approach trains in under one hour on a single RTX 3090 after only 50 minutes of data collection.

Zero-shot transfer to new domains

Because the model requires no expert demonstrations and minimal training data, it transfers immediately to new environments without retraining, solving the deployment bottleneck faced by traditional imitation learning systems.

Bottom Line

By encoding the structure of the Eikonal PDE into neural networks through gradient matching and temporal difference learning, robots can achieve real-time motion planning in high-dimensional spaces with minutes of training on a single GPU, eliminating the costly expert data requirements of conventional data-driven approaches.
