[FULL WORKSHOP] AI Coding For Real Engineers - Matt Pocock, AI Hero (@mattpocockuk )

| Podcasts | April 24, 2026 | 17.5K views | 1:36:30

TL;DR

Matt Pocock demonstrates how traditional software engineering principles apply to AI coding, teaching engineers to manage LLM limitations through "smart zones," avoid "specs-to-code" traps, and use structured interrogation techniques to achieve true alignment with AI agents.

🧠 LLM Constraints & Context Windows

Smart Zone vs. Dumb Zone Dynamics

LLMs perform best early in the context window and degrade significantly beyond roughly 100k tokens, as attention must be spread across far more token relationships.

Quadratic Scaling Strains Attention Mechanisms

Each new token adds attention relationships to every prior token, so the relationship count grows quadratically; performance degrades well before the advertised context window capacity is reached.
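The quadratic growth can be made concrete with a quick count of pairwise relationships in causal self-attention (a simplified model, ignoring architecture-specific optimizations):

```python
def attention_pairs(n_tokens: int) -> int:
    """Count pairwise attention relationships in causal self-attention:
    each token attends to itself and every earlier token."""
    return n_tokens * (n_tokens + 1) // 2

# Doubling the context roughly quadruples the relationships to maintain.
print(attention_pairs(50_000))   # → 1250025000
print(attention_pairs(100_000))  # → 5000050000
```

Going from 50k to 100k tokens doubles the input but quadruples the attention relationships, which is why quality drops faster than the raw token count suggests.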

Size All Tasks to Fit Smart Zones

Break large projects into discrete chunks that complete within the high-performance window before context quality deteriorates.
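One way to operationalize this is to budget tasks against the smart zone before starting a session. The token figures below are illustrative assumptions, not numbers from the talk:

```python
SMART_ZONE_TOKENS = 100_000  # rough quality cutoff (assumption)
OVERHEAD_TOKENS = 20_000     # system prompt, file reads, tool output (assumption)

def plan_sessions(tasks: list[tuple[str, int]],
                  budget: int = SMART_ZONE_TOKENS - OVERHEAD_TOKENS) -> list[list[str]]:
    """Greedily group (task_name, estimated_tokens) pairs into sessions
    that each fit inside the high-performance window."""
    sessions: list[list[str]] = []
    current: list[str] = []
    used = 0
    for name, cost in tasks:
        if cost > budget:
            raise ValueError(f"{name!r} must be split further: {cost} > {budget}")
        if used + cost > budget:
            sessions.append(current)
            current, used = [], 0
        current.append(name)
        used += cost
    if current:
        sessions.append(current)
    return sessions

plan_sessions([("schema", 30_000), ("api", 50_000), ("ui", 40_000)])
# → [["schema", "api"], ["ui"]]
```

Any task too large for one session is a signal to decompose it further rather than letting the agent run into the dumb zone.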

🔄 Session Architecture & State Management

LLMs Reset to Base Like Memento

Clearing context provides predictable reset behavior superior to compacting, which creates inconsistent historical sediment.

Sessions Follow Four Distinct Phases

Every interaction progresses through minimal system prompt, exploration, implementation, and testing/validation stages.

Delegate Exploration to Isolated Sub-Agents

Offload research to child agents that report summaries back, preserving the parent agent's token budget for critical implementation work.
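The delegation pattern can be sketched as follows; the child agent and file path are hypothetical stand-ins for a real LLM call:

```python
class ParentAgent:
    def __init__(self):
        self.context: list[str] = []  # messages counted against the smart zone

    def delegate(self, question: str, child) -> str:
        """Run exploration in an isolated child session; only the child's
        final report enters the parent's context."""
        transcript = child(question)   # child's full transcript, possibly huge
        summary = transcript[-1]       # last message is the child's report
        self.context.append(summary)   # only the summary costs parent tokens
        return summary

# Stand-in child agent (assumption: a real one would call an LLM API).
def fake_researcher(question: str) -> list[str]:
    noisy_steps = [f"read file {i}" for i in range(1000)]
    return noisy_steps + [f"Summary: relevant code found for '{question}'"]

parent = ParentAgent()
report = parent.delegate("Where is session handling implemented?", fake_researcher)
print(len(parent.context))  # → 1 (the 1000 exploration steps never touched the parent)
```

The design choice is that exploration tokens are spent and discarded in the child, while the parent keeps its budget for implementation.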

🤝 Effective Collaboration Patterns

Reject Vibe Coding and Specs-to-Code

Engineers must directly understand and shape code rather than iterating only on specifications while ignoring implementation details.

Grill Me Protocol Establishes Shared Understanding

Relentlessly interrogate the AI about every aspect of the plan to align on the design before writing any implementation code.
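A prompt template in this spirit might look like the sketch below; the exact wording is a hypothetical illustration, not Pocock's verbatim protocol:

```python
# Hypothetical "grill me" prompt template (assumption: wording is illustrative).
GRILL_ME = """Before writing any code, grill me about this plan.
Ask one question at a time about:
- edge cases I haven't specified
- data shapes and interfaces
- failure modes and rollback
Do not start implementing until I say 'aligned'.

Plan:
{plan}
"""

prompt = GRILL_ME.format(plan="Add rate limiting to the public API")
```

The key property is an explicit stop condition: the agent is forbidden from producing code until shared understanding is confirmed.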

Ralph Wiggum Means Iterative Small Changes

Specify the end state and loop through minimal incremental changes rather than executing rigid multi-phase plans.
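The loop structure this implies can be sketched generically: only the end-state check is specified up front, and each iteration makes the smallest change that moves toward it.

```python
def ralph_wiggum_loop(is_done, take_small_step, max_iters: int = 50) -> int:
    """Specify only the end state (is_done) and loop minimal increments
    (take_small_step) until it holds, instead of a rigid multi-phase plan."""
    for i in range(max_iters):
        if is_done():
            return i  # number of steps taken
        take_small_step()
    raise RuntimeError("end state not reached; re-scope the task")

# Toy end state: a counter reaching 5 via +1 increments.
state = {"n": 0}
steps = ralph_wiggum_loop(
    lambda: state["n"] >= 5,
    lambda: state.update(n=state["n"] + 1),
)
print(steps)  # → 5
```

In practice `is_done` would be a test suite or acceptance check and `take_small_step` a single agent session, keeping each iteration inside the smart zone.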

Bottom Line

Treat AI coding as structured engineering by aggressively managing context window limits through sub-agents and small tasks, while using structured interrogation to establish shared understanding before implementation.
