Boris Cherny: How We Built Claude Code
TL;DR
Boris Cherny explains how Claude Code emerged accidentally from a terminal prototype built to test Anthropic's API. He argues for building toward the AI capabilities of six months from now rather than today's limitations, and for evolving the product by observing latent user demand instead of following a rigid roadmap.
🚀 Building for Tomorrow's Models (3 insights)
Target frontier capabilities, not current limitations
Cherny advises founders to build for where models will be in six months, since capabilities improve rapidly. Claude Code was architected on the assumption that models would eventually excel at coding tasks they initially handled poorly.
Aggressive rewriting over code preservation
The entire codebase has been rewritten multiple times over six months; none of the original code remains. This reflects a willingness to discard work as improving models make old scaffolding obsolete.
Plan mode obsolescence timeline
Cherny predicts explicit planning modes may become unnecessary within a month as models become capable of autonomous reasoning without structured prompting or explicit instructions.
🎯 Accidental Origins & Latent Demand (3 insights)
Terminal as constraint, not strategy
The CLI form factor emerged accidentally because it required no UI development, not from strategic planning. The constraint became a strength: the tool is accessible without knowledge of Vim, tmux, or complex IDE configurations.
Users reveal needs through behavior
The team discovered latent demand by observing engineers creating markdown instruction files for the model; these evolved into the formal CLAUDE.md feature rather than being designed top-down.
Organic viral adoption
Internal usage charts showed near-vertical growth without any mandate. After Cherny posted about the tool internally, engineers shared it virally, and colleagues adopted it immediately despite its prototype status.
🛠️ Minimalist Design Philosophy (3 insights)
Shared context over personal prompts
Cherny keeps his personal CLAUDE.md to a minimal two lines and stores all other instructions in a shared file checked into the codebase, which the team updates multiple times a week, treating AI mistakes as opportunities to improve collective documentation.
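To make the split concrete, here is a hypothetical sketch of what such a shared CLAUDE.md might contain; the specific rules below are invented for illustration and are not taken from Anthropic's actual file.

```markdown
# CLAUDE.md (shared, checked into the repo — hypothetical example)

## Conventions
- Use TypeScript strict mode; avoid `any`.
- Run the full test suite before committing.

## Known model mistakes
- Do not edit generated files under `dist/`.
- Prefer small, focused diffs over broad rewrites.
```

A personal file, by contrast, would stay as short as the two lines Cherny describes, with everything team-relevant living in the shared copy.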
Verbosity as debuggability feature
Attempts to summarize bash output met with user revolt because developers need transparency to catch the model going wrong; the team shipped configurable verbosity modes rather than forced brevity.
Delete and restart approach
When a CLAUDE.md grows too large, Cherny recommends deleting it entirely and starting fresh, adding back only what the new model strictly requires, since older scaffolding becomes unnecessary as capabilities advance.
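Applied to the hypothetical file sketched above, the delete-and-restart approach would leave something closer to this minimal version, keeping only the rules the current model still gets wrong; again, the contents are invented for illustration.

```markdown
# CLAUDE.md (rebuilt from scratch — hypothetical example)

- Run the full test suite before committing.
- Do not edit generated files under `dist/`.
```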
Bottom Line
Build the simplest possible interface that solves today's problem while architecting for rapid obsolescence, because AI capabilities improve fast enough to invalidate complex scaffolding within months.