Elon Musk – "In 36 months, the cheapest place to put AI will be space"

Podcasts | February 05, 2026 | 1.29M views | 2:49:46

TL;DR

Elon Musk argues that terrestrial power constraints will make Earth-based AI data centers economically unviable at scale within 36 months. He predicts that orbital data centers powered by space-based solar will become the cheapest option, thanks to effectively unlimited energy, higher solar yield, and regulatory arbitrage, and that getting there will require massive investment in Starship launch cadence and domestic chip manufacturing.

The Terrestrial Power Crisis

Flatlining global electricity

Outside of China, electricity generation is nearly flat, putting it on a collision course with the exponentially growing power demand of AI chips; Musk expects the wall to hit by late 2025, when chips arrive faster than they can be powered.

Utility and turbine bottlenecks

The utility industry moves at "government speed" with interconnect studies taking a year, while gas turbines are sold out through 2030 due to a critical shortage of cast blades and vanes from only three global suppliers.

Solar deployment friction

Scaling terrestrial solar faces "gigantic" import tariffs, minimal domestic production capacity, and severe permitting delays for land use, making it impossible to move fast enough to meet AI power needs.

🛰️ The Orbital Advantage

5x solar efficiency in space

Solar panels in space generate roughly five times more energy than the same panels on Earth, since a continuously sunlit orbit has no atmosphere, weather, or night; factoring in the battery storage this eliminates, Musk calls space solar effectively 10x cheaper.
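
A quick back-of-envelope check of the ~5x figure. The constants below are standard solar numbers I am assuming, not values quoted in the episode:

```python
# Rough sanity check of the ~5x space-solar claim (assumed figures,
# not from the podcast).
SPACE_IRRADIANCE_W_M2 = 1361    # solar constant above the atmosphere
GROUND_PEAK_W_M2 = 1000         # standard test-condition irradiance at ground
GROUND_CAPACITY_FACTOR = 0.20   # typical utility-scale site (night, weather, tilt)
SPACE_CAPACITY_FACTOR = 1.0     # assumes a continuously sunlit orbit

space_yield = SPACE_IRRADIANCE_W_M2 * SPACE_CAPACITY_FACTOR
ground_yield = GROUND_PEAK_W_M2 * GROUND_CAPACITY_FACTOR
print(f"space / ground energy yield ~ {space_yield / ground_yield:.1f}x")  # ~6.8x
```

Under these assumptions the per-panel yield ratio lands in the same ballpark as the quoted 5x, and dropping batteries (no night to bridge) is where the ~10x cost claim comes from.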

Regulatory arbitrage

Space bypasses the "permit hell" of terrestrial construction, allowing faster scaling than attempting to cover Nevada in panels, with Musk predicting space will be the cheapest option in 30-36 months.

Unlimited scaling potential

While Earth intercepts only about half a billionth of the sun's output, orbital data centers could eventually add terawatts of capacity per year, with Musk forecasting that within five years SpaceX could launch more AI compute per year than exists cumulatively on Earth.
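
The "half a billionth" figure checks out from geometry alone, using standard astronomical constants (my figures, not the episode's):

```python
# Fraction of the Sun's output intercepted by Earth's disk:
# pi * R_earth^2 / (4 * pi * AU^2)
R_EARTH_KM = 6371    # mean Earth radius
AU_KM = 1.496e8      # mean Earth-Sun distance

fraction = R_EARTH_KM**2 / (4 * AU_KM**2)
print(f"fraction of solar output hitting Earth ~ {fraction:.1e}")  # ~4.5e-10
```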

🏭 Infrastructure Constraints

The real power math

A cluster of 330,000 GB300 GPUs requires roughly 1 gigawatt of generation capacity at the plant level: not the nominal chip power, but that figure plus roughly 40% overhead for peak cooling in hot climates, networking hardware, and maintenance reserves.
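
A sketch of that math. The per-GPU figure is my assumption for all-in rack power (GPU plus its share of CPU and interconnect), not a number from the podcast:

```python
# Reconstructing the cluster power math under an assumed per-GPU draw.
GPUS = 330_000
WATTS_PER_GPU = 2_000   # assumed all-in draw per GB300-class GPU
OVERHEAD = 1.40         # +40% for peak cooling, networking, maintenance reserve

plant_gw = GPUS * WATTS_PER_GPU * OVERHEAD / 1e9
print(f"required plant capacity ~ {plant_gw:.2f} GW")  # ~0.92 GW, i.e. roughly 1 GW
```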

Chip and memory shortages

Leading-edge fab capacity at TSMC and Samsung is maxed out, and new fabs take roughly five years to build, while high-bandwidth memory (HBM) availability is an even tighter constraint than logic chips and could become the primary bottleneck for large-scale training.

The TeraFab solution

To meet demand, Musk suggests building "TeraFabs" (terawatt-scale fabs) that use conventional equipment in unconventional ways to produce 100 gigawatts' worth of chips annually, while potentially manufacturing gas turbine blades in-house to bypass the supply shortage.
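
For a rough sense of scale, reusing the same assumed ~2 kW all-in draw per GPU as in the cluster math above (my assumption, not a podcast figure):

```python
# What "100 GW of chips per year" implies in unit volume.
TARGET_W_PER_YEAR = 100e9
WATTS_PER_GPU = 2_000   # same assumed all-in draw as above

gpus_per_year = TARGET_W_PER_YEAR / WATTS_PER_GPU
print(f"~ {gpus_per_year / 1e6:.0f} million GPU-class chips per year")  # ~50 million
```

Tens of millions of accelerators per year is far beyond today's leading-edge output, which is the gap the "TeraFab" idea is meant to close.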

🌕 The Multi-Planetary Scale

Starship launch cadence

Deploying 100 gigawatts of orbital compute annually requires approximately 10,000 Starship launches per year (roughly one per hour), which is feasible with a reusable fleet of just 20-30 ships, given how quickly each ship's ground track brings it back over the launch site for reflight.
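
The cadence arithmetic behind those figures (the fleet midpoint below is my choice within the quoted 20-30 range):

```python
# Launch-cadence arithmetic for "10,000 launches/year with a 20-30 ship fleet".
LAUNCHES_PER_YEAR = 10_000
FLEET_SIZE = 25   # midpoint of the 20-30 ship figure

launches_per_hour = LAUNCHES_PER_YEAR / (365 * 24)
flights_per_ship_per_day = LAUNCHES_PER_YEAR / (FLEET_SIZE * 365)
print(f"~{launches_per_hour:.1f} launches/hour fleet-wide, "
      f"~{flights_per_ship_per_day:.1f} flights per ship per day")
```

Fleet-wide that is about 1.1 launches per hour, with each ship flying roughly once a day, which is why a small reusable fleet suffices.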

Lunar manufacturing

For petawatt-scale expansion beyond Earth's launch capacity, the long-term vision is mass drivers on the moon launching solar arrays manufactured from lunar silicon (roughly 20% of regolith) and aluminum, cutting the mass that must be launched from Earth.

Bottom Line

Within three years, the inability to build terrestrial power plants fast enough will force large-scale AI training into orbit, making investments in high-cadence space launch and orbital solar infrastructure the critical path to continued AI scaling.
