NVIDIA: OpenAI, Future of Compute, and the American Dream | BG2 w/ Bill Gurley and Brad Gerstner

Podcasts | September 26, 2025 | 426K views | 1:44:15

TL;DR

Jensen Huang argues that AI compute demand will explode under three compounding scaling laws (pre-training, post-training, and reasoning), and dismisses Wall Street fears of a coming glut by framing the shift from general-purpose to accelerated computing as a multi-trillion-dollar infrastructure transition still in its early innings.

📈 The Three Scaling Laws of Compute

Inference demand scaling by billions of times

Jensen confirms his prediction that inference compute will increase by billions of times (not just 100x or 1000x) as AI shifts from one-shot answers to chain-of-thought reasoning, where models think, research, and iterate before responding.

Three concurrent scaling laws

Compute growth is now driven by three simultaneous factors: pre-training (memorizing patterns), post-training (reinforcement learning and practice), and test-time inference (reasoning), all compounding each other.

AI evolving into concurrent agent systems

Modern AI is no longer a single language model but a system of multiple models running concurrently, using tools, performing research, and generating multimodal content like video, further multiplying compute requirements.

🏭 OpenAI's Hyperscaler Transformation

OpenAI becoming the next multi-trillion-dollar hyperscaler

Jensen explicitly states that OpenAI will likely become the next multi-trillion-dollar hyperscaler (comparable to Meta or Google), offering both consumer and enterprise services, a conviction that underpins NVIDIA's $100 billion investment in the company.

The Stargate infrastructure deal

NVIDIA is partnering with OpenAI to build 10 gigawatts of self-operated AI infrastructure (Stargate), which could generate upwards of $400 billion in revenue for NVIDIA if fully deployed with their chips.

Shift from outsourcing to full-stack ownership

OpenAI is transitioning from relying on Microsoft Azure to building its own 'AI factories' (similar to xAI's Colossus), giving it direct chip-level relationships with NVIDIA and the option to sell excess capacity to third parties.

📉 Market Reality vs. Wall Street Skepticism

Analysts predicting growth flatline are missing the transition

While the Wall Street consensus has NVIDIA's growth flatlining to 8% by 2027-2030 on fears of a 'glut,' Jensen argues this ignores the fundamental shift from general-purpose to accelerated/AI computing, which must refresh trillions of dollars of existing infrastructure.

The $50 trillion GDP augmentation opportunity

Human intelligence represents roughly $50 trillion of global GDP; as AI augments this workforce, Jensen estimates that $10 trillion in value creation would require roughly $5 trillion in annual AI infrastructure capex, a more-than-tenfold expansion of the current ~$400 billion market.

Already at $1 trillion AI revenue when including hyperscaler transitions

The $1 trillion AI revenue target by 2030 is effectively already reached when accounting for hyperscalers like Meta, Google, and ByteDance shifting their entire revenue base from CPU-based recommenders and search to GPU-accelerated AI systems.

Supply Chains and Infrastructure Buildout

Hyperscalers admit they underbuilt

After initial hesitation, major hyperscalers including Microsoft have accelerated investments because they 'dramatically underbuilt' relative to actual demand, particularly after the 'second exponential' of reasoning emerged in the last year.

Supply chain is geared for demand doubling

NVIDIA has 'plumbed the supply chain' from wafer starts to HBM memory and co-packaging; the company can now double output if needed, and it simply responds to demand signals from customers who consistently under-forecast their needs.

Data processing is the next massive market

Beyond generative AI, the next major transition will be traditional structured data processing (SQL, data warehouses) moving from CPUs to accelerated computing, representing the vast majority of the world's current CPU usage.

Bottom Line

The shift from general-purpose to accelerated computing is a fundamental infrastructure transition that will take years to complete, ensuring sustained demand growth as AI moves from simple inference to complex reasoning systems that augment trillions of dollars of global economic output.
