Building your own software factory — Eric Zakariasson, Cursor

| Podcasts | April 28, 2026 | 7.2K views

TL;DR

Eric Zakariasson from Cursor details the roadmap to building autonomous "software factories," outlining six levels of AI coding autonomy and the practical infrastructure—modular codebases, dynamic guardrails, and verifiable systems—required to evolve from writing code to managing AI agents.

🎚️ The Six Levels of Coding Autonomy (3 insights)

From autocomplete to dark factory

Dan Shapiro's framework describes progression from "spicy autocomplete" (level 1) to fully autonomous "dark factories" (level 6) where agents operate as black boxes shipping code without human intervention.

Current adoption plateau

Most users currently operate between levels 2 and 3 (pair programming), while advanced practitioners reach level 4, where agents generate the majority of code for humans to review and verify via traces.

Factory benefits

True software factories enable 24/7 throughput via scalable agents, consistent assembly-line outputs, and allow humans to leverage taste and creativity rather than manual coding.

🏗️ Building the Infrastructure (4 insights)

Codebase primitives

Modular, colocated code with clear usage patterns (package.json scripts, auth methods) reduces discovery friction for agents, making the codebase more "in-distribution" for model comprehension.
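As an illustration of such a usage pattern (a hypothetical package.json; the summary names package.json scripts as one example), well-named scripts give an agent one discoverable, canonical way to build, test, and lint instead of guessing at commands:

```json
{
  "scripts": {
    "build": "tsc -p tsconfig.json",
    "test": "vitest run",
    "test:ui": "playwright test",
    "lint": "eslint . --max-warnings 0",
    "typecheck": "tsc --noEmit"
  }
}
```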

Dynamic guardrails

Rules should emerge organically from observed agent failures rather than being pre-installed; hooks then restrict access to sensitive areas such as encryption or authentication code to prevent costly mistakes.
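The hook mechanism can be sketched generically (the summary doesn't specify Cursor's hook API; `PROTECTED_PREFIXES`, `isPathProtected`, and `checkEdit` are illustrative names, not real tooling):

```typescript
// Illustrative guardrail: deny agent edits under sensitive directories.
// These names are hypothetical, not a real Cursor API.
const PROTECTED_PREFIXES = ["src/auth/", "src/crypto/", "infra/secrets/"];

function isPathProtected(filePath: string): boolean {
  // Normalize Windows-style separators before prefix matching.
  const normalized = filePath.replace(/\\/g, "/");
  return PROTECTED_PREFIXES.some((prefix) => normalized.startsWith(prefix));
}

// A pre-edit hook would call this and reject the edit with a reason,
// so the agent learns the boundary instead of failing silently.
function checkEdit(filePath: string): { allowed: boolean; reason?: string } {
  if (isPathProtected(filePath)) {
    return {
      allowed: false,
      reason: `Edits under a protected path require human review: ${filePath}`,
    };
  }
  return { allowed: true };
}
```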

Verifiable outputs

Agents require automated testing capabilities—including unit tests, integration tests, and Playwright UI verification—to autonomously verify their work through recorded browser sessions and self-review.
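The self-verification loop can be reduced to a minimal sketch (names are illustrative, not from the talk): the agent runs every available check and only considers its work done when all of them pass, otherwise surfacing what failed:

```typescript
// Illustrative self-verification gate: the agent aggregates check results
// (unit tests, integration tests, a Playwright UI run, its own review)
// and reports success only when every check passes.
interface CheckResult {
  name: string;     // e.g. "unit", "integration", "playwright-ui"
  passed: boolean;
  detail?: string;  // e.g. a failing test name or a recorded-session URL
}

function verify(results: CheckResult[]): { done: boolean; failures: string[] } {
  const failures = results.filter((r) => !r.passed).map((r) => r.name);
  return { done: failures.length === 0, failures };
}
```

A failing UI run, for instance, would leave `done: false` with `"playwright-ui"` in `failures`, prompting another iteration rather than a handoff to a human.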

Environmental enablers

Equip agents with skills (MCPs), feature flagging tools for safe deployment, and reproducible cloud VMs to enable autonomous scaling and true async operation.
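Feature flagging is what decouples agent-shipped code from release; a minimal sketch of the pattern (the flag store and flag names here are hypothetical):

```typescript
// Illustrative feature flag check: agent-shipped code lands behind a flag,
// disabled by default, so merging to main is decoupled from releasing.
const flags: Record<string, boolean> = {
  "new-checkout-flow": false, // shipped by an agent, not yet released
  "dark-mode": true,
};

function isEnabled(flag: string): boolean {
  // Unknown flags default to off: the safe choice for agent-created features.
  return flags[flag] ?? false;
}

function checkoutVariant(): string {
  return isEnabled("new-checkout-flow") ? "new checkout" : "legacy checkout";
}
```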

👔 Running the Factory (4 insights)

Worker to manager transition

Shift from writing code to overseeing async agent operations, inspecting outputs rather than raw code and aggregating changes upward as agent count scales.

Context frontloading

Provide detailed specs and architectural plans upfront, before delegating long-running tasks, to minimize interruptions and keep each task within what the model can reliably handle.

Parallelization strategy

Scope work carefully to prevent merge conflicts when running multiple agents simultaneously, treating each agent as a single unit of work on isolated codebase sections.
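One way to make that scoping concrete (a sketch; `scopesOverlap` is an illustrative helper, not tooling from the talk) is to check, before dispatching agents in parallel, that no two tasks claim overlapping file paths:

```typescript
// Illustrative pre-dispatch check: two parallel agent tasks conflict if a
// path in one task's scope contains, or is contained by, a path in the other's.
function scopesOverlap(a: string[], b: string[]): boolean {
  return a.some((pa) =>
    b.some((pb) => pa.startsWith(pb) || pb.startsWith(pa))
  );
}
```

So `["src/auth/"]` versus `["src/billing/"]` can run in parallel, while `["src/"]` versus `["src/billing/"]` cannot: the second agent's scope sits inside the first's.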

Preserve tribal knowledge

Maintain human understanding of data flows and critical system architecture rather than outsourcing all comprehension to agents.

Bottom Line

Treat your codebase as a factory floor by implementing verifiable testing systems, dynamic guardrails, and modular patterns that allow AI agents to work asynchronously while you shift from writing code to managing intent.
