Dario Amodei — The highest-stakes financial model in history

Podcasts · February 13, 2026 · 963K views · 2:22:20

TL;DR

Dario Amodei argues that AI capabilities are still progressing along the expected exponential curve and are approaching the end of that rapid-growth phase, with models likely to reach expert-level coding within 1-2 years and 'a country of geniuses in a data center' within 10 years, even as public attention has drifted away from this trajectory.

🧮 Scaling Laws & The 'Big Blob of Compute'

Seven factors determine AI progress

Amodei stands by his 2017 'big blob of compute' hypothesis: a short list of factors, chief among them raw compute, data quantity and quality, training duration, scalable objective functions, and numerical stability, governs AI progress, while specific architectural cleverness contributes comparatively little.

RL scaling mirrors pre-training patterns

Reinforcement learning is exhibiting the same log-linear scaling behavior as pre-training: performance on math contests and coding benchmarks improves predictably as RL training compute increases.
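The log-linear pattern described above can be sketched with a quick least-squares fit: performance rises roughly linearly in log(compute), so each 10x of compute buys a fixed number of benchmark points. The numbers below are synthetic illustrations, not figures from the episode.

```python
import numpy as np

# Synthetic (hypothetical) data illustrating a log-linear scaling law:
# contest-math score rises roughly linearly in log10 of RL training compute.
compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])  # FLOPs, arbitrary units
score = np.array([22.0, 31.5, 40.8, 50.2, 59.9])    # benchmark score, made up

# Least-squares fit: score ~= slope * log10(compute) + intercept
slope, intercept = np.polyfit(np.log10(compute), score, 1)

# Extrapolate one further order of magnitude of compute (1e23 FLOPs)
predicted = slope * 23 + intercept
print(f"~{slope:.1f} points per 10x compute; predicted score at 1e23: {predicted:.1f}")
```

If the law holds, extrapolating along the fitted line is exactly the kind of "predictable improvement" the claim refers to; the risk, of course, is that the line bends outside the fitted range.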

Pre-training sits between evolution and learning

Unlike human brains, which start with evolutionary priors, language models begin as blank slates. This makes pre-training more analogous to evolution, while in-context learning resembles a human's on-the-spot adaptation.

⏱️ AGI Timelines & Capability Trajectories

90% confidence on 10-year 'country of geniuses'

Amodei places 90% probability on achieving a 'country of geniuses in a data center' by 2035, with only irreducible geopolitical shocks, such as an invasion of Taiwan, standing in the way.

Expert coding arrives in 1-2 years

For verifiable tasks like end-to-end software engineering, Amodei predicts near-complete automation within one to two years, setting aside only irreducible-uncertainty scenarios.

The verification gap remains the final hurdle

While models excel at verifiable tasks like math and coding, non-verifiable domains such as scientific discovery and novel-writing remain the frontier where generalization has yet to be demonstrated.

🏭 Economic Impact & Labor Markets

Automation follows a spectrum, not a switch

Progress moves from 90% of code written by AI, to 100% of code, to 90% of end-to-end tasks, to 90% reduced demand for software engineers: a sequence that unfolds over time rather than eliminating jobs overnight.

Anthropic's revenue shows explosive demand

Anthropic's revenue has grown roughly 10x per year: from near zero to ~$100M in 2023, ~$100M to ~$1B in 2024, and ~$1B to ~$9-10B in 2025, with January 2026 alone adding several billion more.
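The roughly 10x-per-year cadence is easy to check from the figures above. As an assumption for illustration, "$9-10B" is taken as $9.5B; these are the summary's round numbers, not audited financials.

```python
# Approximate year-end revenue figures from the summary; "$9-10B" is taken
# as $9.5B here purely for illustration, not as a confirmed figure.
revenue = {2023: 100e6, 2024: 1e9, 2025: 9.5e9}

years = sorted(revenue)
multiples = [revenue[b] / revenue[a] for a, b in zip(years, years[1:])]
for (a, b), m in zip(zip(years, years[1:]), multiples):
    print(f"{a} -> {b}: {m:.1f}x")  # year-over-year growth multiple
```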

Economic diffusion debates miss the transition speed

While skeptics cite slow economic diffusion as a brake on AI's impact, Amodei argues the transition will be fast, pointing to recursive self-improvement and immediately useful productivity tools as intermediate paths between the 'no impact' and 'Dyson sphere' extremes.

Bottom Line

Organizations should prepare for expert-level AI coding capabilities within 24 months and transformative economic disruption within the decade, as AI capabilities are nearing the end of their exponential run-up and will likely generalize beyond verifiable tasks.
