Stanford CS336 Language Modeling from Scratch | Spring 2026 | Lecture 10: Inference

Podcasts | May 11, 2026 | 1:25:30

TL;DR

Inference now dominates AI economics: OpenAI generates 8.6 trillion tokens daily, enough to exceed the token count of a frontier model's entire training run in under four days. Unlike training, autoregressive inference cannot parallelize across the sequence dimension, making it fundamentally memory-bandwidth bound rather than compute bound; batch sizes below roughly 295 on an H100 are too small to saturate its compute.

💰 The Economics of Inference at Scale (3 insights)

Inference costs dwarf training rapidly

OpenAI generates 8.6 trillion tokens daily, so in under four days its inference fleet processes more tokens than went into DeepSeek v4's entire 32-trillion-token training run.
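
As a rough back-of-envelope check (comparing token counts only, since per-token training and inference costs differ; both figures are the ones quoted above):

```python
# Days of OpenAI-scale generation needed to match the token count of
# DeepSeek v4's training run (both figures from the summary above).
inference_tokens_per_day = 8.6e12
training_tokens = 32e12

days_to_match = training_tokens / inference_tokens_per_day
print(f"{days_to_match:.1f} days")  # ~3.7 days, i.e. under four days
```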

Agentic workloads remove throughput ceilings

Unlike chatbots constrained by human reading speeds, AI agents generate tokens for internal reasoning and tool use, creating unlimited demand for inference compute.

Three distinct performance metrics matter

Time to First Token (TTFT) drives perceived responsiveness in interactive apps, per-token latency determines how fast a single response streams, and throughput measures total tokens generated per second across all concurrent requests.
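
A minimal sketch of how these metrics could be computed from per-token arrival times; the function and variable names are illustrative rather than from the lecture, and batch-level throughput would sum token counts over all concurrent requests:

```python
def request_metrics(request_start, token_times):
    """Given the wall-clock arrival time of each generated token, report
    TTFT, mean inter-token latency, and this request's token throughput."""
    ttft = token_times[0] - request_start                       # time to first token
    gaps = [b - a for a, b in zip(token_times, token_times[1:])]
    inter_token_latency = sum(gaps) / len(gaps) if gaps else 0.0
    throughput = len(token_times) / (token_times[-1] - request_start)
    return ttft, inter_token_latency, throughput
```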

🧮 Why Inference Hits the Memory Wall (3 insights)

Autoregressive generation prevents parallelization

Training processes all tokens of a sequence in parallel, but inference must generate one token at a time because each new token depends on every token before it, eliminating the sequence dimension as a parallelization opportunity.
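
A minimal sketch of a greedy decode loop that makes the dependency explicit; the `model` here is a placeholder callable assumed to return logits of shape (batch, length, vocab):

```python
import torch

def generate(model, prompt_ids, num_new_tokens):
    """Greedy autoregressive decoding: the prompt (B, S) can be processed in
    one parallel pass, but each new token must wait for the previous one."""
    ids = prompt_ids
    for _ in range(num_new_tokens):
        logits = model(ids)                         # (B, current_len, vocab)
        next_id = logits[:, -1, :].argmax(dim=-1)   # depends on all prior tokens
        ids = torch.cat([ids, next_id[:, None]], dim=1)
    return ids
```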

Small batches make GPUs memory-bound

Matrix-vector operations at batch size B=1 have arithmetic intensity near 1 FLOP per byte, while an H100 needs intensity above roughly 295 FLOPs per byte to be compute-bound.
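
A sketch of that arithmetic, assuming bf16 (2 bytes per element) and approximate H100 figures of 989 TFLOP/s dense bf16 compute and 3.35 TB/s HBM bandwidth:

```python
def matmul_intensity(B, D, F, bytes_per_el=2):
    """Arithmetic intensity (FLOPs per byte) of a (B, D) @ (D, F) matmul."""
    flops = 2 * B * D * F
    bytes_moved = bytes_per_el * (B * D + D * F + B * F)
    return flops / bytes_moved

h100_ratio = 989e12 / 3.35e12                  # ≈ 295 FLOPs per byte
print(matmul_intensity(1, 4096, 16384))        # ≈ 1: heavily memory-bound
print(matmul_intensity(512, 4096, 16384))      # ≈ 440: compute-bound
print(h100_ratio)
```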

Hardware intensity mismatch defines bottlenecks

When a kernel's arithmetic intensity falls below the accelerator's ratio of peak FLOPs to memory bandwidth, it becomes memory-bandwidth limited; for inference this is the default state.
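
A minimal roofline-style cost model under the same assumed peaks (real kernels achieve only a fraction of both):

```python
def kernel_time(flops, bytes_moved, peak_flops=989e12, peak_bw=3.35e12):
    """Roofline estimate: a kernel takes at least the larger of its compute
    time and its memory-traffic time; the larger one is the bottleneck."""
    compute_t = flops / peak_flops
    memory_t = bytes_moved / peak_bw
    bound = "compute" if compute_t >= memory_t else "memory-bandwidth"
    return max(compute_t, memory_t), bound

# Example: a batch-1 matrix-vector product is limited by memory traffic.
print(kernel_time(flops=2 * 4096 * 16384, bytes_moved=2 * 4096 * 16384))
```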

🏗️ Transformer Architecture Mechanics (3 insights)

Notation defines tensor dimensions precisely

B = batch size, S = number of input (prompt) tokens, T = number of output (generated) tokens, D = model dimension, H = head dimension, and F = MLP hidden dimension (typically 4D), so S + T is the full sequence length.

Group Query Attention reduces KV heads

The architecture splits the N query heads into K groups of G heads each (N = K·G), with all heads in a group sharing one set of keys and values, so the cache stores K rather than N key-value heads and shrinks by a factor of G.
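
A rough sketch of the cache-size arithmetic using the notation above; the layer count, cached sequence length, and head counts below are illustrative assumptions, not tied to any particular model:

```python
def kv_cache_bytes(B, S, kv_heads, H, layers, bytes_per_el=2):
    """Bytes of cached keys and values: 2 tensors (K and V) per layer,
    per cached token, per key-value head, at bf16 precision."""
    return 2 * layers * B * S * kv_heads * H * bytes_per_el

# N = 64 query heads grouped into K = 8 KV heads (G = 8 queries per group);
# S here counts all tokens currently cached (prompt plus generated so far).
mha = kv_cache_bytes(B=1, S=8192, kv_heads=64, H=128, layers=80)
gqa = kv_cache_bytes(B=1, S=8192, kv_heads=8,  H=128, layers=80)
print(mha / 2**30, gqa / 2**30)   # ~20 GiB vs ~2.5 GiB: a G-fold (8x) saving
```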

Naive inference scales cubically with length

Without caching, each generation step recomputes attention over the entire prefix, O(T²) work in the worst case, so producing T tokens takes O(T³) total time; caching keys and values brings the total down to O(T²).
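
A pure-counting sketch of the scaling claim (abstract operation counts, not real FLOPs):

```python
def naive_total_work(T):
    """No cache: step t re-runs attention over the whole t-token prefix,
    ~t**2 work, so the total over T steps grows like T**3."""
    return sum(t * t for t in range(1, T + 1))

def cached_total_work(T):
    """KV cache: step t attends one new query against t cached keys,
    ~t work per step, so the total grows like T**2."""
    return sum(t for t in range(1, T + 1))

print(naive_total_work(1024) / naive_total_work(512))    # ≈ 8x for 2x the length
print(cached_total_work(1024) / cached_total_work(512))  # ≈ 4x for 2x the length
```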

Bottom Line

Inference efficiency is constrained by memory bandwidth, not compute; optimizing requires maximizing batch sizes to improve arithmetic intensity above hardware thresholds or aggressively reducing memory movement through quantization and KV cache optimization.
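
A sketch of why reducing memory movement pays off directly in memory-bound decoding; the 70B-parameter model and the H100 bandwidth figure are illustrative assumptions, and KV-cache and activation traffic are ignored:

```python
def min_decode_step_time(n_params, bits_per_weight, hbm_bw=3.35e12):
    """Lower bound on one memory-bound decode step: every weight is read
    from HBM at least once, so step time >= weight bytes / bandwidth."""
    weight_bytes = n_params * bits_per_weight / 8
    return weight_bytes / hbm_bw

for bits in (16, 8, 4):    # bf16 vs int8 vs int4 weights
    t = min_decode_step_time(70e9, bits)
    print(f"{bits}-bit: {t * 1e3:.1f} ms/step, {1 / t:.0f} tokens/s per stream")
```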
