[FULL WORKSHOP] AI Coding For Real Engineers - Matt Pocock, AI Hero (@mattpocockuk)
TL;DR
Matt Pocock demonstrates how traditional software engineering principles apply to AI coding, teaching engineers to manage LLM limitations through "smart zones," avoid "specs-to-code" traps, and use structured interrogation techniques to achieve true alignment with AI agents.
🧠 LLM Constraints & Context Windows
Smart Zone vs. Dumb Zone Dynamics
LLMs perform optimally in early context but degrade significantly after approximately 100k tokens due to strained attention relationships.
Quadratic Scaling Strains Attention Mechanisms
Adding tokens increases attention relationships quadratically, causing inevitable performance degradation regardless of total context window capacity.
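The quadratic growth above is easy to make concrete: full self-attention relates every token to every other token, so the number of relationships grows with the square of the context length. A minimal illustration (the function name is ours, not from the talk):

```python
def attention_pairs(n_tokens: int) -> int:
    """Pairwise attention relationships in full self-attention: n x n."""
    return n_tokens * n_tokens

# Growing the context 100x grows the relationships 10,000x.
for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} tokens -> {attention_pairs(n):>18,} relationships")
```

Going from 1k to 100k tokens multiplies the attention workload by ten thousand, which is why performance degrades long before the nominal context window is exhausted.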
Size All Tasks to Fit Smart Zones
Break large projects into discrete chunks that complete within the high-performance window before context quality deteriorates.
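One way to operationalize this is a simple budgeting pass: estimate the token cost of each task and group tasks into chunks that stay inside the smart zone. A sketch, assuming a rough 100k-token threshold from the talk (the function and threshold names are hypothetical):

```python
SMART_ZONE_TOKENS = 100_000  # approximate degradation threshold

def chunk_tasks(tasks: list[tuple[str, int]],
                budget: int = SMART_ZONE_TOKENS) -> list[list[str]]:
    """Group (name, estimated_tokens) tasks so each chunk fits the budget."""
    chunks: list[list[str]] = []
    current: list[str] = []
    used = 0
    for name, cost in tasks:
        if used + cost > budget and current:
            chunks.append(current)        # close the chunk: it is "smart-zone sized"
            current, used = [], 0
        current.append(name)
        used += cost
    if current:
        chunks.append(current)
    return chunks
```

Each chunk then gets its own fresh session, so no single session drifts into the dumb zone.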
🔄 Session Architecture & State Management
LLMs Reset to Base Like Memento
Clearing context provides predictable reset behavior superior to compacting, which creates inconsistent historical sediment.
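The contrast can be sketched in a few lines (illustrative only, not an actual agent API): clearing returns the exact base state every time, while compacting substitutes a lossy summary whose contents vary from run to run.

```python
def clear_context(history: list[str]) -> list[str]:
    """Predictable reset: identical to a fresh session, every time."""
    return []

def compact_context(history: list[str], summarize) -> list[str]:
    """Lossy reset: history is replaced by a summary of variable quality."""
    return [summarize(history)]
```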
Sessions Follow Four Distinct Phases
Every interaction progresses through four stages: a minimal system prompt, exploration, implementation, and testing/validation.
Delegate Exploration to Isolated Sub-Agents
Offload research to child agents that report summaries back, preserving the parent agent's token budget for critical implementation work.
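The delegation pattern looks roughly like this sketch. Everything here is an assumption for illustration: `run_agent` stands in for whatever spawns a child agent with its own fresh context, and only the child's short summary re-enters the parent.

```python
def explore_with_subagent(question: str, run_agent) -> str:
    """Delegate research to an isolated child agent; keep only its summary."""
    child_prompt = (
        "Research the following and reply with a short summary "
        f"(under 500 tokens):\n{question}"
    )
    # The child burns its own context window on file reads and searches;
    # none of that intermediate output lands in the parent's context.
    summary = run_agent(child_prompt)
    return summary
```

The parent pays only for the summary, not for the exploration, keeping its remaining budget in the smart zone for implementation.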
🤝 Effective Collaboration Patterns
Reject Vibe Coding and Specs-to-Code
Engineers must directly understand and shape code rather than iterating only on specifications while ignoring implementation details.
Grill Me Protocol Establishes Shared Understanding
Relentlessly interrogate the AI about every aspect of the plan to align on the design before writing any implementation code.
Ralph Wiggum Means Iterative Small Changes
Specify the end state and loop through minimal incremental changes rather than executing rigid multi-phase plans.
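The pattern reduces to a loop: check whether the end state is reached, and if not, make the smallest change that moves toward it. A hedged sketch (function names are ours; `goal_reached` and `make_small_change` stand in for whatever test and edit step the agent performs):

```python
def iterate_until(goal_reached, make_small_change, state, max_steps: int = 100):
    """Loop minimal increments toward a specified end state."""
    for _ in range(max_steps):
        if goal_reached(state):
            return state
        state = make_small_change(state)  # one small change per iteration
    raise RuntimeError("goal not reached within step budget")
```

Contrast this with a rigid multi-phase plan: here the only fixed thing is the end-state check, and every iteration is free to take the smallest useful step.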
Bottom Line
Treat AI coding as structured engineering by aggressively managing context window limits through sub-agents and small tasks, while using structured interrogation to establish shared understanding before implementation.