How METR Measures Long Tasks and Experienced Open Source Dev Productivity - Joel Becker, METR
TL;DR
Joel Becker of METR argues that a slowdown in compute growth would proportionally delay AI capability milestones measured by task time horizons, and presents findings that experienced open-source developers saw minimal productivity gains from AI coding assistants like Cursor, challenging optimistic adoption assumptions.
📈 Compute Scaling & AI Timelines
Compute-time horizon proportionality causes milestone delays
If compute growth slows to half its current rate, time-horizon growth slows proportionally, potentially causing enormous delays in reaching AI milestones like automating one-month tasks (a simple arithmetic sketch follows this section).
Physical and economic constraints threaten compute growth
Power constraints and spending limits for large tech companies and nation-states may bend the compute curve downward after 2030, directly slowing the pace of capability advancement.
Proportionality holds absent software-only singularity
This causal relationship between compute and time horizons holds only until a software-only singularity or an unpredictable architectural breakthrough decouples capability gains from hardware scaling.
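To make the proportionality claim concrete, here is a minimal sketch of the milestone-delay arithmetic in Python. It assumes exponential time-horizon growth with a fixed doubling time; the specific numbers (a ~2-hour current horizon, a 7-month doubling time, a ~167-hour one-month milestone) are illustrative assumptions, not METR's published estimates.

```python
from math import log2

def years_to_milestone(current_horizon_hours: float,
                       target_horizon_hours: float,
                       doubling_time_years: float) -> float:
    """Years until the time horizon reaches the target, assuming
    exponential growth with a fixed doubling time."""
    doublings_needed = log2(target_horizon_hours / current_horizon_hours)
    return doublings_needed * doubling_time_years

# Illustrative numbers only, not METR's published estimates:
# a ~2-hour horizon today, a one-work-month (~167 hour) milestone,
# and a baseline doubling time of 7 months.
current_horizon = 2.0
one_month_task = 167.0
baseline_doubling = 7 / 12  # in years

baseline = years_to_milestone(current_horizon, one_month_task, baseline_doubling)
# If compute growth halves and horizon growth slows proportionally,
# the doubling time doubles, so the milestone arrives twice as late.
slowed = years_to_milestone(current_horizon, one_month_task, baseline_doubling * 2)

print(f"baseline: {baseline:.1f} years; halved compute growth: {slowed:.1f} years")
```

Under these toy numbers, halving compute growth doubles the doubling time and pushes the one-month milestone out from roughly 3.7 to roughly 7.4 years.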
💻 Developer Productivity Findings
Experienced developers show negligible Cursor speedup
A study of 16 experienced open-source developers using Cursor found minimal productivity gains, contradicting assumptions that AI tools automatically accelerate professional workflows.
Self-reported time estimates prove consistently unreliable
Developers consistently misestimate the absolute time spent on tasks even while accurately reporting their relative sense of productivity, making time-based self-report surveys unreliable for capability forecasting.
Familiarity with tools shows minimal explanatory power
While Meta observed a J-curve with AI tool adoption, METR found no evidence that Cursor familiarity explained the null results among developers already experienced with LLMs.
🏗️ Evaluation Context & Limitations
AI excels on legacy over open-source code
AI assistants provide more value on disorganized, poorly documented legacy codebases than on well-structured open-source projects optimized for human navigation.
Doubling time horizons break evaluation feasibility
As AI time horizons keep doubling, evaluation tasks eventually exceed the length of work humans can feasibly baseline and monitor, potentially breaking the metric's usefulness before maximum capabilities are reached (see the sketch after this section).
Capability constraints outweigh human learning curves
The barrier to developer speedup appears rooted in fundamental AI capability limits rather than temporary human adoption friction or suboptimal prompting strategies.
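The feasibility concern above can be illustrated with a similarly hedged sketch: if evaluation tasks are sized to match the AI time horizon and that horizon keeps doubling on a fixed schedule, per-task human baselining hits a practical ceiling within a few years. The starting horizon, doubling time, and ceiling below are illustrative assumptions, not figures from the talk.

```python
# Illustrative assumptions, not figures from the talk: evaluation tasks
# sized to a ~2-hour AI time horizon today, a 7-month doubling time, and
# a practical ceiling of ~80 hours (two work-weeks) of continuous human
# baselining per task.
horizon_hours = 2.0
doubling_time_months = 7
ceiling_hours = 80.0

months_elapsed = 0
while horizon_hours <= ceiling_hours:
    horizon_hours *= 2
    months_elapsed += doubling_time_months

print(f"Human baselining becomes infeasible after ~{months_elapsed} months "
      f"(task length ≈ {horizon_hours:.0f} hours > {ceiling_hours:.0f}-hour ceiling)")
```

With these toy numbers, six doublings (about three and a half years) push a single evaluation task past an 80-hour human-baselining ceiling.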
Bottom Line
AI capability forecasting must account for compute constraints that could proportionally delay long-horizon task automation, while current evidence suggests that experienced developers' lack of speedup from AI coding tools reflects fundamental capability limits rather than temporary adoption friction.