How to code with AI agents - Advice from OpenClaw creator | Peter Steinberger and Lex Fridman

| Podcasts | February 12, 2026 | 180K views | 31:14

TL;DR

Steinberger describes his evolution toward an 'agentic engineering' workflow of running multiple CLI-based AI agents simultaneously, arguing that mastery requires empathy for how agents perceive their limited context, acceptance of imperfection, and concise prompts rather than complex orchestration.

🖥️ The Shift to Agent-Driven Development

Terminal-First Workflow

Transitioned from IDE-heavy work to running 7+ Claude Code terminal windows side-by-side, using the IDE only as a diff viewer and rarely reading code directly.

Selective Code Review

Stops reading 'boring' boilerplate (data transformation, Tailwind alignment) and focuses human review only on critical architecture, database logic, and security-sensitive PRs.

Voice-Native Input

Uses voice-to-text extensively for agent prompting, reserving his hands for terminal commands; he finds spoken language more natural than typing and has even temporarily lost his voice from overuse.

🧠 The Agentic Trap and Engineering Philosophy

The Complexity Curve

Engineers progress from simple prompts to over-engineered multi-agent orchestration with custom workflows, then reach 'zen' by returning to short, simple prompts like 'look at these files and make these changes.'

Agentic Engineering vs Vibe Coding

Rejects 'vibe coding' as a slur implying carelessness; 'agentic engineering' requires practiced skill, treating AI like a capable junior developer who sometimes needs guidance but often has better ideas than the human.

Empathy for the Agent

Success requires understanding that agents start each session with zero context and limited context windows, necessitating guidance on where to look rather than assuming full codebase knowledge.

Modern Workflow Practices

Always Commit Forward

Never reverts code; instead fixes issues forward with agents, maintaining a 'YOLO' approach where main is always shippable and refactors are cheap enough to do on demand.

Conversational Debugging

Treats interactions as discussions with agents, asking 'do you understand the intent?' before implementation, and stopping long-running tasks to reassess architectural friction rather than forcing solutions.

Local-First Validation

Runs CI locally before pushing rather than relying primarily on GitHub CI, prioritizing speed and iteration over traditional branch protection models.
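This habit can be wired into git itself. Below is a minimal sketch of a pre-push hook, assuming a POSIX shell; the checks use `true` as placeholders so the sketch runs, and should be swapped for a project's real lint/test/build commands:

```shell
#!/bin/sh
# Sketch of .git/hooks/pre-push (make it executable with chmod +x).
# git aborts the push if this script exits non-zero.
set -e  # stop at the first failing check

run_check() {
  # Label a check, then run it; a non-zero exit aborts the push via set -e.
  echo "==> $1"
  shift
  "$@"
}

run_check "lint"  true   # placeholder: e.g. `npm run lint` or `cargo clippy`
run_check "tests" true   # placeholder: e.g. `npm test` or `cargo test`
run_check "build" true   # placeholder: e.g. `npm run build`

echo "local CI passed; pushing"
```

Running the same checks locally that CI would run keeps the feedback loop in seconds rather than minutes, which is the point of the practice described above.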

🤝 Human-AI Collaboration Balance

Letting Go of Perfection

Accepts that agents produce 'good enough' code that differs from personal style, comparing it to managing human engineers where micromanagement destroys velocity and morale.

Codebase Design for Agents

Structures projects using obvious, searchable naming conventions that agents can discover, rather than optimizing for human aesthetic preferences that confuse AI search patterns.
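Since agents locate code mostly by literal search rather than by reading the whole tree, the naming advice can be made concrete. A toy sketch (all paths hypothetical) in which one feature noun surfaces every related file:

```shell
#!/bin/sh
# Hypothetical layout: every file for the "invoice" feature carries the word
# "invoice", so the same noun an agent sees in a prompt finds them all.
mkdir -p demo/src/billing
: > demo/src/billing/invoice_service.ts      # obvious, searchable name
: > demo/src/billing/invoice_repository.ts   # shares the feature noun
: > demo/src/billing/tax.ts                  # unrelated file, not matched

find demo/src -name '*invoice*' | sort
# demo/src/billing/invoice_repository.ts
# demo/src/billing/invoice_service.ts
```

The inverse holds too: clever or abbreviated names (e.g. a hypothetical `inv_svc.ts`) are invisible to a literal search, which is exactly the failure mode this insight warns about.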

Against Full Automation

Opposes orchestrators like GitTown that attempt to automate the entire loop, comparing them to failed waterfall models; believes human vision and style require iterative, hands-on involvement.

Bottom Line

Treat AI agents like capable junior engineers with a fresh perspective: provide short, clear guidance that accounts for their limited context window, accept working but imperfect solutions rather than forcing your own style on them, and keep human control over architectural vision instead of attempting full automation.
