Stanford Robotics Seminar ENGR319 | Winter 2026 | 𝚿0: An Open Foundation Model

Podcasts | March 3, 2026 | 16.1K views | 1:04:10

TL;DR

Rio Wang introduces 𝚿0 (Psi-0), an open foundation model for universal humanoid locomotion-manipulation. Moving beyond fixed-base ~18-DoF manipulation systems, it leverages scalable egocentric human data collection and unified whole-body control to tackle the integration of mobility, dexterity, and reasoning.

🤖 The Humanoid Capability Gap

Degrees of Freedom Mismatch

Current vision-language-action (VLA) models typically handle ~18 DoF for fixed-base manipulation, but humanoids like the Unitree G1 require ~43 DoF including legs, waist, and dexterous hands, making existing models incompatible off-the-shelf.
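A rough back-of-the-envelope sketch of the mismatch; the joint groupings below are illustrative assumptions, not the G1's published spec or the talk's exact accounting:

```python
# Hypothetical joint-group breakdown of a ~43-DoF humanoid (illustrative only)
WHOLE_BODY_DOF = {
    "left_leg": 6, "right_leg": 6,    # locomotion
    "waist": 3,
    "left_arm": 7, "right_arm": 7,
    "left_hand": 7, "right_hand": 7,  # dexterous hands
}

# A typical fixed-base bimanual setup: two 7-DoF arms, two 1-DoF grippers,
# plus a 2-DoF torso (again, an assumption for illustration)
FIXED_BASE_DOF = 7 + 7 + 1 + 1 + 2

whole_body = sum(WHOLE_BODY_DOF.values())
print(whole_body, FIXED_BASE_DOF)  # ~43-dim vs ~18-dim action space
```

An action head trained to emit 18-dimensional commands simply has no outputs for the legs, waist, or finger joints, which is why such models cannot drive a humanoid off-the-shelf.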

Integration Challenge

Despite advances in locomotion and manipulation separately, combining mobility, dexterity, and high-level reasoning in a unified pipeline remains unsolved, preventing deployment for industrial assembly or household tasks.

Real-Time Deployment Issues

Directly deploying standard VLA models on humanoids causes action jitter and pauses due to inference delays, creating a significant train-test gap that destabilizes control.

📹 Egocentric Data Strategy

Human-Centric Data Source

Instead of noisy internet videos or simulation with physics gaps, the project uses egocentric human data captured via custom lightweight headsets with four cameras (two stereo front, two downward-facing).

Scalable Collection

The cap-based device lets each person capture about 5 hours of daily activity without disrupting their work; the team has already accumulated over 1,000 hours across 20 devices, with a target of 10,000 hours.

Natural Alignment

Human hand tracking provides action space alignment with humanoid hands while egocentric views match robot observation perspectives, significantly reducing the domain gap compared to third-person internet data.

🎮 Teleoperation & Dataset Infrastructure

VR-Based Capture System

The teleoperation setup uses Pico 4U VR headsets with wrist/body motion trackers and Manus gloves to capture whole-body motion, controlling upper body via multi-target inverse kinematics and lower body via reinforcement learning policies.
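The multi-target IK above can be sketched with damped least squares on a toy planar arm, stacking the error and Jacobian of two simultaneous targets (here, elbow and end-effector positions). Link lengths, gains, and the numerical Jacobian are illustrative assumptions, not the talk's actual solver:

```python
import numpy as np

L = np.array([0.3, 0.25, 0.15])  # hypothetical link lengths (m)

def fk(q):
    """Stacked task vector: elbow (x, y) after 2 links, end-effector (x, y) after 3."""
    angles = np.cumsum(q)
    steps = np.stack([L * np.cos(angles), L * np.sin(angles)], axis=1)
    pts = np.cumsum(steps, axis=0)
    return np.concatenate([pts[1], pts[2]])

def multi_target_ik(q, target, iters=300, lam=1e-2, step=0.5):
    """Damped-least-squares IK over multiple stacked targets."""
    for _ in range(iters):
        err = target - fk(q)
        # numerical Jacobian of the 4-dim task vector w.r.t. 3 joints
        J = np.zeros((4, 3))
        eps = 1e-6
        for i in range(3):
            dq = np.zeros(3)
            dq[i] = eps
            J[:, i] = (fk(q + dq) - fk(q)) / eps
        # dq = J^T (J J^T + lam*I)^-1 * err  (damping keeps the step well-posed)
        q = q + step * J.T @ np.linalg.solve(J @ J.T + lam * np.eye(4), err)
    return q

q_true = np.array([0.4, 0.5, -0.3])          # generate a consistent target
q_sol = multi_target_ik(np.zeros(3), fk(q_true))
print(np.round(fk(q_sol) - fk(q_true), 4))   # residual near zero
```

In the real setup the same idea scales up: wrist, elbow, and head poses from the VR trackers become stacked targets for the upper-body joints, while a learned RL policy handles the legs.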

High-Frequency Data Pipeline

Optimized I/O offloading reduced control delay from 500 milliseconds to 20 milliseconds, halving teleoperation time and enabling 30Hz capture of multi-modal streams including RGB, depth, tactile sensors, and IMU data.
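One common pattern for this kind of I/O offloading (an assumption about their implementation, not a confirmed detail) is a bounded queue feeding a background writer thread, so slow disk writes never block the 30 Hz control loop:

```python
import queue
import threading
import time

log_q: queue.Queue = queue.Queue(maxsize=256)
written = []  # stand-in for files on disk

def writer_thread() -> None:
    """Drain the queue off the control thread; slow I/O lives here."""
    while True:
        frame = log_q.get()
        if frame is None:       # sentinel: flush complete, shut down
            break
        time.sleep(0.005)       # simulate a slow disk write
        written.append(frame)

t = threading.Thread(target=writer_thread)
t.start()

# 30 Hz control loop: hand frames off without waiting on disk I/O
period = 1.0 / 30.0
for i in range(10):
    start = time.perf_counter()
    frame = {"t": i, "rgb": b"...", "imu": (0.0, 0.0, 9.8)}  # placeholder data
    try:
        log_q.put_nowait(frame)  # never block the control loop
    except queue.Full:
        pass                     # drop a frame rather than stall control
    elapsed = time.perf_counter() - start
    time.sleep(max(0.0, period - elapsed))

log_q.put(None)                  # ask the writer to finish
t.join()
```

The key design choice is `put_nowait`: if logging ever falls behind, the loop drops data instead of adding latency to the robot's commands.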

Humanoid Everyday Dataset

The dataset features over 10,000 trajectories (3 million frames) across 260 diverse tasks including rare bipedal locomotion-manipulation combinations and human-robot interactions, collected on Unitree G1 and H1 robots.

Bottom Line

Scalable egocentric human data collection and unified VLA models that directly control full humanoid bodies (rather than just arms) are essential to bridge the gap from laboratory demonstrations to practical industrial and domestic deployment.
