How METR measures Long Tasks and Experienced Open Source Dev Productivity - Joel Becker, METR
TL;DR
Joel Becker of METR argues that slowing compute growth would proportionally delay AI capability milestones as measured by task time horizons. He also presents findings that experienced open-source developers showed minimal productivity gains from AI coding assistants such as Cursor, challenging optimistic adoption curves.
📈 Compute Scaling & AI Timelines
Compute-time horizon proportionality causes milestone delays
If compute growth slows by half, time horizon growth slows proportionally, potentially causing enormous delays in reaching AI milestones like automating one-month tasks.
Physical and economic constraints threaten compute growth
Power constraints and spending limits for large tech companies and nation states may bend the compute curve downward after 2030, directly impacting capability advancement speed.
Proportionality holds absent software-only singularity
This causal relationship between compute and time horizons holds only until a software-only singularity or an unpredictable architectural breakthrough decouples software improvements from hardware scaling.
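The proportionality claim in the first insight above can be sketched numerically. The figures below (current horizon, doubling time, milestone size) are illustrative assumptions, not METR's published estimates:

```python
import math

# Illustrative sketch of the proportionality argument; all numbers are
# assumptions for the sake of the example, not METR's figures.
current_horizon_hours = 1.0      # assumed current AI task time horizon
doubling_time_months = 7.0       # assumed months per horizon doubling
target_horizon_hours = 167.0     # ~1 work-month of task time (167 hours)

# Number of horizon doublings needed to reach the one-month-task milestone.
doublings = math.log2(target_horizon_hours / current_horizon_hours)

baseline_months = doublings * doubling_time_months
# If compute growth halves and horizon growth slows proportionally,
# each doubling takes twice as long:
slowed_months = doublings * doubling_time_months * 2

print(f"doublings needed: {doublings:.1f}")
print(f"baseline: {baseline_months:.0f} months; slowed: {slowed_months:.0f} months")
print(f"extra delay: {slowed_months - baseline_months:.0f} months")
```

Under these assumed numbers, halving the growth rate adds roughly as many months of delay as the entire baseline path, i.e. a multi-year slip in the milestone.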
💻 Developer Productivity Findings
Experienced developers show negligible Cursor speedup
A study of 16 experienced open-source developers using Cursor found minimal productivity gains, contradicting assumptions that AI tools automatically accelerate professional workflows.
Self-reported time estimates prove consistently unreliable
Developers consistently misestimate absolute time spent on tasks despite accurately reporting relative productivity feelings, making time-based surveys unreliable for capability forecasting.
Familiarity with tools shows minimal explanatory power
While Meta observed a J-curve with AI tool adoption, METR found no evidence that Cursor familiarity explained the null results among developers already experienced with LLMs.
🏗️ Evaluation Context & Limitations
AI excels on legacy over open-source code
AI assistants demonstrate greater utility on disorganized legacy codebases lacking documentation compared to well-structured open-source projects optimized for human navigation.
Doubling time horizons break evaluation feasibility
As AI time horizons double, evaluation tasks eventually exceed feasible human monitoring periods, potentially breaking the metric's usefulness before maximum capabilities are reached.
Capability constraints outweigh human learning curves
The barrier to developer speedup appears rooted in fundamental AI capability limits rather than temporary human adoption friction or suboptimal prompting strategies.
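The evaluation-feasibility concern above can be made concrete with a toy calculation. The starting horizon and the maximum feasible human-baselining window are hypothetical assumptions:

```python
# Hypothetical sketch: how many horizon doublings until evaluation tasks
# exceed a feasible human-baselining window? All numbers are assumptions.
horizon_hours = 1.0          # assumed current AI task time horizon
max_baseline_hours = 40.0    # assume one work-week is the longest feasible
                             # period a human baseliner can be monitored

doublings = 0
while horizon_hours <= max_baseline_hours:
    horizon_hours *= 2
    doublings += 1

# With these assumptions, only a handful of doublings remain before tasks
# outgrow the baselining window and the metric becomes hard to ground.
print(doublings)
```

At any fixed monitoring budget, exponential horizon growth exhausts the remaining headroom in a small, fixed number of doublings, which is why the metric can break well before maximum capability is reached.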
Bottom Line
AI capability forecasting must account for compute constraints that could proportionally delay long-horizon task automation. Meanwhile, current evidence suggests that experienced developers face fundamental AI capability limits with coding tools, not merely temporary adoption friction.
More from AI Engineer
Identity for AI Agents - Patrick Riley & Carlos Galan, Auth0
Auth0/Okta leaders Patrick Riley and Carlos Galan unveil new AI identity infrastructure including Token Vault for secure credential management and Async OAuth for human approvals, presenting a four-pillar framework to authenticate users and authorize autonomous agent actions across enterprise applications.
OpenAI + @Temporalio: Building Durable, Production Ready Agents - Cornelia Davis, Temporal
Cornelia Davis from Temporal demonstrates how integrating OpenAI's Agents SDK with Temporal's distributed systems platform creates production-ready AI agents that automatically handle crashes, retries, and state persistence without developers writing complex resilience code.
Your MCP Server is Bad (and you should feel bad) - Jeremiah Lowin, Prefect
Jeremiah Lowin argues that most MCP servers fail because developers treat them like REST APIs for humans rather than curated interfaces optimized for AI agents' specific constraints around discovery cost, iteration speed, and limited context windows.
Spec-Driven Development: Agentic Coding at FAANG Scale and Quality — Al Harris, Amazon Kiro
Amazon Principal Engineer Al Harris introduces Spec-Driven Development through Kiro, an agentic IDE that replaces unstructured 'vibe coding' with a formal workflow converting prompts into EARS-format requirements and property-based tests, enabling FAANG-scale reliability in AI-assisted development.