How METR measures Long Tasks and Experienced Open Source Dev Productivity - Joel Becker, METR

Podcasts | January 19, 2026 | 9.3K views | 1:15:52

TL;DR

Joel Becker of METR argues that slowing compute growth would proportionally delay the AI capability milestones measured by task time horizons. He also presents findings that experienced open-source developers saw minimal productivity gains from AI coding assistants like Cursor, challenging optimistic adoption-curve assumptions.

📈 Compute Scaling & AI Timelines (3 insights)

Compute-time horizon proportionality causes milestone delays

If compute growth slows by half, time horizon growth slows proportionally, potentially causing enormous delays in reaching AI milestones like automating one-month tasks.
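The proportionality claim can be made concrete with a minimal sketch, assuming the time horizon grows exponentially with a fixed doubling time and that halving compute growth doubles that doubling time. The 7-month doubling cadence and 1-hour starting horizon below are illustrative assumptions, not figures from the talk.

```python
import math

def months_to_milestone(current_horizon_hours, target_hours, doubling_months):
    """Months until the time horizon reaches the target, under exponential growth."""
    doublings_needed = math.log2(target_hours / current_horizon_hours)
    return doublings_needed * doubling_months

ONE_MONTH_TASK = 30 * 24  # a one-month task, expressed in hours

# Baseline vs. halved compute growth (doubling time assumed to double)
baseline = months_to_milestone(1.0, ONE_MONTH_TASK, doubling_months=7)
slowed = months_to_milestone(1.0, ONE_MONTH_TASK, doubling_months=14)

print(f"baseline: {baseline:.0f} months, slowed compute: {slowed:.0f} months")
```

Under these assumptions the delay is exactly proportional: doubling the doubling time pushes the one-month-task milestone from roughly five and a half years out to roughly eleven.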

Physical and economic constraints threaten compute growth

Power constraints and spending limits for large tech companies and nation states may bend the compute curve downward after 2030, directly impacting capability advancement speed.

Proportionality holds absent software-only singularity

This causal relationship between compute and time horizons persists only until a software singularity or unpredictable architectural breakthrough decouples software improvements from hardware scaling.

💻 Developer Productivity Findings (3 insights)

Experienced developers show negligible Cursor speedup

A study of 16 experienced open-source developers using Cursor found minimal productivity gains, contradicting assumptions that AI tools automatically accelerate professional workflows.

Self-reported time estimates prove consistently unreliable

Developers consistently misestimate absolute time spent on tasks despite accurately reporting relative productivity feelings, making time-based surveys unreliable for capability forecasting.

Familiarity with tools shows minimal explanatory power

While Meta observed a J-curve with AI tool adoption, METR found no evidence that Cursor familiarity explained the null results among developers already experienced with LLMs.

🏗️ Evaluation Context & Limitations (3 insights)

AI excels on legacy over open-source code

AI assistants demonstrate greater utility on disorganized legacy codebases lacking documentation compared to well-structured open-source projects optimized for human navigation.

Doubling time horizons break evaluation feasibility

As AI time horizons double, evaluation tasks eventually exceed feasible human monitoring periods, potentially breaking the metric's usefulness before maximum capabilities are reached.
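The feasibility limit follows from simple doubling arithmetic: if evaluation tasks track the AI time horizon and the horizon doubles at a fixed cadence, tasks eventually outgrow any fixed human monitoring budget. A minimal sketch, with illustrative numbers not taken from the talk:

```python
import math

def doublings_until_infeasible(horizon_hours, monitoring_budget_hours):
    """Number of horizon doublings after which tasks exceed the monitoring budget."""
    return math.ceil(math.log2(monitoring_budget_hours / horizon_hours))

# e.g. a 1-hour horizon against a one-work-week (40-hour) monitoring budget:
# after 5 doublings the horizon is 32h (still feasible); after 6 it is 64h.
print(doublings_until_infeasible(1, 40))  # → 6
```

The point is that the budget only buys logarithmically many doublings, so the metric can stop being measurable well before capabilities plateau.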

Capability constraints outweigh human learning curves

The barrier to developer speedup appears rooted in fundamental AI capability limits rather than temporary human adoption friction or suboptimal prompting strategies.

Bottom Line

AI capability forecasting must account for potential compute constraints causing proportional delays in long-horizon task automation, while current evidence suggests experienced developers face fundamental capability limits with AI coding tools rather than temporary adoption friction.
