Elon Musk – "In 36 months, the cheapest place to put AI will be space"

Podcasts | February 05, 2026 | 1.21 million views | 2:49:46

TL;DR

Elon Musk argues that terrestrial power constraints will make Earth-based AI data centers economically unviable at scale within 36 months. He predicts that orbital data centers powered by space-based solar will become the cheapest option thanks to unlimited energy availability, higher solar efficiency, and regulatory arbitrage, and that getting there will require massive investment in Starship launch cadence and domestic chip manufacturing.

⚡ The Terrestrial Power Crisis 3 insights

Flatlining global electricity

Outside of China, electricity generation is nearly flat, putting it in fundamental conflict with the exponentially growing power demands of AI chips; by late 2025, chips will be arriving faster than they can be powered.

Utility and turbine bottlenecks

The utility industry moves at "government speed" with interconnect studies taking a year, while gas turbines are sold out through 2030 due to a critical shortage of cast blades and vanes from only three global suppliers.

Solar deployment friction

Scaling terrestrial solar faces "gigantic" import tariffs, minimal domestic production capacity, and severe permitting delays for land use, making it impossible to move fast enough to meet AI power needs.

🛰️ The Orbital Advantage 3 insights

5x solar efficiency in space

Solar panels in space generate roughly five times more power than on Earth because there is no atmospheric attenuation, weather, or day-night cycle, and they are effectively about 10x cheaper per delivered kilowatt-hour once the battery storage a terrestrial installation would need is eliminated.
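
A rough sanity check on those multiples (a minimal sketch; the capacity factors and the battery point below are illustrative assumptions, not figures from the episode):

```python
# Rough check on the "~5x more energy per panel in orbit" claim.
# Capacity factors are illustrative assumptions, not figures from the episode.

PANEL_RATED_KW = 1.0                  # nameplate rating of a reference panel, kW
TERRESTRIAL_CAPACITY_FACTOR = 0.20    # clouds, nights, seasons, panel tilt
ORBITAL_CAPACITY_FACTOR = 0.99        # near-continuous sunlight in orbit

HOURS_PER_YEAR = 8_766

ground_kwh = PANEL_RATED_KW * TERRESTRIAL_CAPACITY_FACTOR * HOURS_PER_YEAR
orbit_kwh = PANEL_RATED_KW * ORBITAL_CAPACITY_FACTOR * HOURS_PER_YEAR

print(f"Ground: {ground_kwh:,.0f} kWh/yr   Orbit: {orbit_kwh:,.0f} kWh/yr")
print(f"Orbit advantage: {orbit_kwh / ground_kwh:.1f}x")   # ~5x
# The claimed ~10x cost edge then comes from also dropping the battery storage
# a terrestrial data center would need to ride through nights and weather.
```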

Regulatory arbitrage

Space bypasses the "permit hell" of terrestrial construction, allowing faster scaling than attempting to cover Nevada in panels, with Musk predicting space will be the cheapest option in 30-36 months.

Unlimited scaling potential

While Earth intercepts only about half a billionth of the sun's energy output, orbital data centers could eventually harness terawatts of power, with Musk forecasting that within five years SpaceX could launch more AI compute capacity per year than exists cumulatively on Earth.
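
The "half a billionth" figure follows from simple geometry; the solar constants below are standard physics values rather than numbers stated in the episode:

```python
import math

# Fraction of the Sun's total output intercepted by Earth (standard values, not from the episode).
SOLAR_LUMINOSITY_W = 3.828e26     # total power radiated by the Sun
SOLAR_CONSTANT_W_M2 = 1361        # irradiance at Earth's distance, W/m^2
EARTH_RADIUS_M = 6.371e6

intercepted_w = SOLAR_CONSTANT_W_M2 * math.pi * EARTH_RADIUS_M ** 2  # Earth's cross-sectional disc
fraction = intercepted_w / SOLAR_LUMINOSITY_W

print(f"Earth intercepts ~{intercepted_w:.2e} W, i.e. {fraction:.2e} of the Sun's output")
# ~4.5e-10, i.e. roughly half a billionth, consistent with the claim above.
```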

🏭 Infrastructure Constraints 3 insights

The real power math

A cluster of 330,000 GB300 GPUs requires roughly 1 gigawatt of generation capacity at the plant level, well above the nominal chip power, because roughly 40% overhead must be added for peak cooling in hot climates, networking hardware, and maintenance reserves.
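
A sketch of that plant-level math; the ~2 kW per-GPU rack figure (GPU plus its share of host CPUs, NVLink switches, and power conversion) is an assumption for illustration, while the 330,000-GPU count and 40% overhead come from the summary above:

```python
# Plant-level power for a 330k-GPU GB300 cluster (sketch; per-GPU rack power is assumed).
GPUS = 330_000
RACK_POWER_PER_GPU_KW = 2.0   # assumption: GPU + host CPUs + NVLink switches + power conversion
OVERHEAD = 0.40               # peak cooling in hot climates, networking, maintenance reserve

it_load_mw = GPUS * RACK_POWER_PER_GPU_KW / 1_000
plant_capacity_mw = it_load_mw * (1 + OVERHEAD)

print(f"IT load:        {it_load_mw:,.0f} MW")
print(f"Plant capacity: {plant_capacity_mw:,.0f} MW")   # ~920 MW, i.e. roughly 1 GW
```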

Chip and memory shortages

Leading-edge fab capacity at TSMC and Samsung is maxed out, with new fabs taking about five years to build, while high-bandwidth memory (HBM) availability poses a greater constraint than logic chips and could become the primary bottleneck for large-scale training.

The TeraFab solution

To meet demand, Musk suggests building "TeraFabs" (terawatt-scale fabs) using conventional equipment in unconventional ways to produce 100 gigawatts of chips annually, while potentially manufacturing gas turbine blades internally to bypass supply shortages.
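
For a sense of scale, 100 gigawatts of chip output per year translates into tens of millions of accelerators; the per-chip power below is an assumption for illustration, not a figure from the episode:

```python
# What 100 GW/yr of chip output means in accelerator counts (per-chip power is assumed).
ANNUAL_CHIP_OUTPUT_GW = 100
ACCELERATOR_TDP_KW = 1.4   # assumption: rough TDP of a current flagship AI accelerator

accelerators_per_year = ANNUAL_CHIP_OUTPUT_GW * 1e6 / ACCELERATOR_TDP_KW
print(f"~{accelerators_per_year / 1e6:.0f} million accelerators per year")   # ~70 million
```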

🌕 The Multi-Planetary Scale 2 insights

Starship launch cadence

Deploying 100 gigawatts of orbital compute annually requires approximately 10,000 Starship launches per year (roughly one per hour), which is feasible with a reusable fleet of just 20-30 ships given ground track reuse cycles.
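
The cadence and fleet figures imply roughly daily flights per ship; the per-launch payload power is derived from the numbers above rather than stated directly in the episode:

```python
# Launch cadence implied by 100 GW/yr of orbital compute (derived from the figures above).
ANNUAL_ORBITAL_POWER_GW = 100
LAUNCHES_PER_YEAR = 10_000
FLEET_SIZE = 25   # midpoint of the 20-30 ship range

power_per_launch_mw = ANNUAL_ORBITAL_POWER_GW * 1_000 / LAUNCHES_PER_YEAR
launches_per_day = LAUNCHES_PER_YEAR / 365
flights_per_ship_per_year = LAUNCHES_PER_YEAR / FLEET_SIZE

print(f"{power_per_launch_mw:.0f} MW of compute per launch")            # 10 MW
print(f"{launches_per_day:.1f} launches/day (~one per hour)")           # ~27/day
print(f"~{flights_per_ship_per_year:.0f} flights per ship per year")    # ~400, i.e. just over daily reuse
```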

Lunar manufacturing

For petawatt-scale expansion beyond Earth's launch capacity, the long-term vision involves mass drivers on the moon that launch solar arrays manufactured from lunar silicon (roughly 20% of the soil) and aluminum, cutting the mass that must be launched from Earth.

Bottom Line

Within three years, the inability to build terrestrial power plants fast enough will force large-scale AI training into orbit, making investments in high-cadence space launch and orbital solar infrastructure the critical path to continued AI scaling.

More from Dwarkesh Patel

Terence Tao – Kepler, Newton, and the true nature of mathematical discovery
1:23:44

Mathematician Terence Tao compares Kepler's twenty-year process of testing random hypotheses against Tycho Brahe's dataset to modern AI capabilities, arguing that while artificial intelligence has eliminated the bottleneck of idea generation in science, it has simultaneously created an unprecedented crisis in verification and validation that current peer review systems cannot handle.

5 days ago · 8 points
Dylan Patel — The Single Biggest Bottleneck to Scaling AI Compute
2:31:04

Dylan Patel explains that Big Tech's $600B CapEx represents multi-year pre-purchases of power and data centers through 2029, while AI labs face an immediate crunch where Anthropic's conservative compute strategy forces them to pay massive premiums on spot markets compared to OpenAI's aggressive long-term contracting.

12 days ago · 9 points
Dario Amodei — The highest-stakes financial model in history
2:22:20

Dario Amodei argues that AI capabilities are progressing along the expected exponential curve and are nearing the end of that rapid growth phase, with models likely to achieve expert-level coding within 1-2 years and 'country of geniuses' level capabilities within 10 years, despite public distraction from this reality.

about 1 month ago · 9 points