Why Anthropic Thinks AI Should Have Its Own Computer — Felix Rieseberg of Claude Cowork/Code
TL;DR
Anthropic's Felix Rieseberg explains why AI agents need their own virtual computers to be effective, arguing that confining Claude to chat interfaces severely limits capability. He details how this philosophy shaped Claude Cowork and why product development is shifting from lengthy planning to rapidly building multiple prototypes simultaneously.
💻 The Virtual Machine Philosophy
AI agents need dedicated computers, not just chat access
Rieseberg argues that limiting Claude to chat interfaces is like forcing a developer to work via email instead of giving them a computer, severely constraining their ability to solve complex problems.
VMs balance autonomy with safety
Running Claude in an isolated Linux virtual machine allows it to install tools (Python, Node.js) and manage files freely without pestering users for permissions, while network controls maintain security boundaries.
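The autonomy/safety split described above can be sketched as a simple policy check. This is a hypothetical illustration, not Cowork's actual implementation: the function name, action labels, and allowlist are all invented for the example. The idea is that actions confined to the VM (writing files, installing tools) need no user prompt, while network egress is checked against a boundary policy:

```python
# Minimal sketch (all names hypothetical) of the policy split: full autonomy
# inside the isolated VM, with network controls enforced at the boundary.

ALLOWED_HOSTS = {"pypi.org", "registry.npmjs.org"}  # example egress allowlist


def agent_may(action: str, target: str) -> bool:
    """Decide without prompting the user whether an agent action proceeds.

    Filesystem writes, tool installs, and command execution happen inside
    the sandboxed VM, so they are always permitted. Network egress crosses
    the security boundary, so it is checked against the allowlist.
    """
    if action in {"write_file", "install_tool", "run_command"}:
        return True  # contained by the VM: no permission prompt needed
    if action == "network_egress":
        return target in ALLOWED_HOSTS  # the boundary where safety is enforced
    return False  # anything outside the sandbox model is denied by default
```

In this framing, the user experience improvement comes from moving the permission check from per-action prompts to a one-time network policy: `agent_may("install_tool", "python3")` succeeds silently, while egress to an unlisted host is blocked at the boundary rather than negotiated in chat.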
⚡ Cheap Execution Development
Build all candidates instead of writing specs
Anthropic has shifted from drafting lengthy product memos to rapidly building multiple prototype candidates simultaneously, selecting winners based on user testing rather than theoretical planning.
Execution is now cheaper than deliberation
When facing technology choices, teams now build all options quickly and test with focus groups, as implementation cost has dropped to the point where validating through prototypes beats debating architectures.
🏗️ Cowork as Extensible Platform
A superset, not a simplified version
Despite being marketed as 'user-friendly,' Cowork adds capabilities to Claude Code through VM isolation and deeper integrations, similar to how VS Code evolved from a 'simple' editor into the most extensible development platform.
Deep integration beats MCP configuration
Rather than forcing users to manually configure dozens of MCP connectors, Cowork achieves functionality through tight integration with Claude and Chrome agents, reducing friction while increasing power for non-technical users.
🌐 Future of Software Platforms
Platforms beat hyper-personalization
Contrary to AI hype about everyone building custom software, Rieseberg argues that composable platforms become more valuable because reusing existing primitives is more efficient than rebuilding from scratch for every use case.
Reusable components create leverage
The value shifts to platforms that provide robust substrates and 'Lego pieces' that can be quickly assembled into specific workflows, rather than fragmented individual instances that require maintenance.
Bottom Line
Give AI agents their own isolated computer environments to maximize their capability, and adopt a rapid prototyping approach where you build multiple solution variants simultaneously rather than over-planning single implementations.
More from Latent Space
🔬 There Is No AlphaFold for Materials — AI for Materials Discovery with Heather Kulik
MIT professor Heather Kulik explains how AI discovered quantum phenomena to create 4x tougher polymers and why materials science lacks an 'AlphaFold' equivalent due to missing experimental datasets, emphasizing that domain expertise remains essential to validate AI predictions in chemistry.
Dreamer: the Agent OS for Everyone — David Singleton
David Singleton introduces Dreamer as an 'Agent OS' that combines a personal AI Sidekick with a marketplace of tools and agents, enabling both non-technical users and engineers to build, customize, and deploy AI applications through natural language while maintaining privacy through centralized, OS-level architecture.
⚡️ Monty: the ultrafast Python interpreter by Agents for Agents — Samuel Colvin, Pydantic
Samuel Colvin from Pydantic introduces Monty, a Rust-based Python interpreter designed specifically for AI agents that achieves sub-microsecond execution latency by running in-process, bridging the gap between rigid tool calling and heavy containerized sandboxes.
NVIDIA's AI Engineers: Brev, Dynamo and Agent Inference at Planetary Scale and Speed of Light
NVIDIA engineers discuss securing AI agents through the 'two of three' capability rule, the evolution of Brev from startup to NVIDIA's developer experience layer, and how DGX Spark bridges local and cloud GPU workflows for a broader developer audience.