Stanford CS336 Language Modeling from Scratch | Spring 2026 | Lecture 10: Inference
TL;DR
Inference now dominates AI economics: OpenAI generates 8.6 trillion tokens daily, exceeding frontier-model training compute in under four days. Unlike training, autoregressive inference cannot parallelize over a sequence's tokens, making it fundamentally memory-bandwidth bound rather than compute bound; on H100s, batch sizes below roughly 295 fail to saturate the GPU's compute.
💰 The Economics of Inference at Scale (3 insights)
Inference costs dwarf training rapidly
OpenAI generates 8.6 trillion tokens daily; in under four days, that exceeds the compute of DeepSeek v4's entire 32-trillion-token training run.
Agentic workloads remove throughput ceilings
Unlike chatbots, whose output rate is capped by human reading speed, AI agents generate tokens for internal reasoning and tool use, creating effectively unbounded demand for inference compute.
Three distinct performance metrics matter
Time to First Token (TTFT) drives perceived responsiveness in interactive apps, per-token latency determines how fast a single response streams, and throughput measures aggregate tokens per second in batch processing.
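A minimal sketch of how these three metrics fall out of per-token timestamps; the streaming `generate_fn` generator is a hypothetical API, not something from the lecture:

```python
import time

def measure_generation(generate_fn, prompt):
    """Time a streaming generation call; generate_fn is assumed to yield one token at a time."""
    start = time.perf_counter()
    token_times = []
    for _ in generate_fn(prompt):
        token_times.append(time.perf_counter())

    ttft = token_times[0] - start                 # time to first token (s): perceived responsiveness
    total = token_times[-1] - start               # wall-clock time for the full response (s)
    per_token_latency = total / len(token_times)  # average seconds per generated token
    throughput = len(token_times) / total         # tokens/s for this request; batch throughput sums over requests
    return ttft, per_token_latency, throughput
```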
🧮 Why Inference Hits the Memory Wall (3 insights)
Autoregressive generation prevents parallelization
Training processes all sequence tokens simultaneously, but inference generates one token at a time, eliminating the sequence dimension as a parallelization opportunity.
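Schematically, training scores every position in one forward pass, while decoding is a loop whose next input is its own previous output. A minimal greedy-decoding sketch in PyTorch (the `model` returning per-position logits is assumed, and no KV cache is shown):

```python
import torch

@torch.no_grad()
def greedy_decode(model, prompt_ids, num_new_tokens):
    """Autoregressive decoding: each step consumes the token produced by the previous step,
    so generated positions cannot be computed in parallel the way training positions can."""
    ids = prompt_ids                                              # (B, S) prompt tokens
    for _ in range(num_new_tokens):
        logits = model(ids)                                       # (B, current_len, vocab)
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)   # (B, 1) greedy pick
        ids = torch.cat([ids, next_id], dim=1)                    # the loop-carried dependence
    return ids
```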
Small batches make GPUs memory-bound
Matrix multiplications at batch size B=1 have arithmetic intensity near 1 FLOP per byte, while an H100 needs intensity above roughly 295 to be compute-bound.
Hardware intensity mismatch defines bottlenecks
When a kernel's arithmetic intensity falls below the accelerator's ratio of peak FLOPs to memory bandwidth, it is memory-bandwidth limited, which is the default regime for inference.
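A back-of-the-envelope roofline sketch for a single weight matmul, using published H100 SXM specs (~989 bf16 TFLOP/s, ~3.35 TB/s HBM bandwidth) and illustrative layer sizes as assumptions:

```python
def arithmetic_intensity(B, D, F, bytes_per_elem=2):
    """FLOPs per byte for a (B, D) @ (D, F) matmul in bf16."""
    flops = 2 * B * D * F                                    # one multiply-accumulate per weight per row
    bytes_moved = bytes_per_elem * (B * D + D * F + B * F)   # read activations, read weights, write output
    return flops / bytes_moved

H100_FLOPS = 989e12               # bf16 tensor-core peak, assumed
H100_BW = 3.35e12                 # HBM3 bandwidth in bytes/s, assumed
threshold = H100_FLOPS / H100_BW  # ~295 FLOPs/byte

D, F = 8192, 4 * 8192             # illustrative model/MLP dimensions
for B in (1, 64, 512, 4096):
    ai = arithmetic_intensity(B, D, F)
    regime = "compute-bound" if ai > threshold else "memory-bound"
    print(f"B={B:5d}  intensity ~ {ai:7.1f}  ({regime})")
# B=1 lands near intensity 1 (memory-bound); only batches well past the ~295 FLOP/byte
# threshold let the matmul saturate the H100's compute units.
```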
🏗️ Transformer Architecture Mechanics (3 insights)
Notation defines tensor dimensions precisely
B = batch size, T = sequence length, D = model dimension, H = head dimension, and F = 4D (the MLP expansion); S denotes the number of input (prefill) tokens and T the number of output (generated) tokens.
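A shape-only sketch of how these symbols map onto the tensors of one decoder layer; the concrete values are illustrative, not from the lecture:

```python
# Illustrative dimensions under the lecture's notation.
B, S, T = 8, 1024, 256      # batch size, input (prefill) tokens, output (generated) tokens
D = 4096                    # model (residual-stream) dimension
N, K = 32, 8                # query heads and key/value heads (GQA, introduced below)
H = D // N                  # head dimension, 128 here
F = 4 * D                   # MLP hidden dimension

x_prefill = (B, S, D)       # layer input while processing the prompt
q = (B, S, N, H)            # queries
k = v = (B, S, K, H)        # keys/values: only K heads are cached per layer
mlp_hidden = (B, S, F)      # MLP expansion
```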
Group Query Attention reduces KV heads
Grouped-Query Attention splits the N query heads into K groups of G = N/K heads each, with all query heads in a group sharing one key/value head, shrinking the cached keys and values by a factor of G.
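A small sketch of the KV-cache saving, using an assumed 70B-class configuration (80 layers, 64 query heads, 8 KV heads, head dimension 128) rather than numbers from the lecture:

```python
def kv_cache_bytes(num_layers, kv_heads, head_dim, seq_len, batch, bytes_per_elem=2):
    """Total bytes of cached keys and values (the leading 2 covers K and V)."""
    return 2 * num_layers * kv_heads * head_dim * seq_len * batch * bytes_per_elem

L, N, K, H = 80, 64, 8, 128                              # assumed 70B-class configuration
mha = kv_cache_bytes(L, N, H, seq_len=8192, batch=1)     # every query head keeps its own K/V
gqa = kv_cache_bytes(L, K, H, seq_len=8192, batch=1)     # only K = N/G key/value heads are cached
print(f"MHA cache: {mha/1e9:.1f} GB, GQA cache: {gqa/1e9:.1f} GB ({N // K}x smaller)")
# ~21.5 GB vs ~2.7 GB at 8K context, batch 1, bf16.
```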
Naive inference scales cubically with length
Without caching, generating T tokens requires O(T²) attention computation per step and O(T³) total time.
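A FLOP-counting sketch of that scaling, comparing naive recomputation against decoding with a KV cache (dimensions are illustrative):

```python
def naive_decode_flops(T, D):
    """No KV cache: step t rebuilds attention over all t tokens (~4*t^2*D FLOPs for QK^T and scores@V),
    so the total over T steps grows as O(T^3)."""
    return sum(4 * t * t * D for t in range(1, T + 1))

def cached_decode_flops(T, D):
    """With a KV cache: step t only attends from the new token to t cached positions (~4*t*D FLOPs),
    so the total grows as O(T^2)."""
    return sum(4 * t * D for t in range(1, T + 1))

D, T = 4096, 4096
print(f"naive:  {naive_decode_flops(T, D):.2e} attention FLOPs")   # ~(4/3) * D * T^3
print(f"cached: {cached_decode_flops(T, D):.2e} attention FLOPs")  # ~2 * D * T^2
```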
Bottom Line
Inference efficiency is constrained by memory bandwidth, not compute. Optimizing it means either raising batch size until arithmetic intensity clears the hardware threshold or aggressively reducing memory movement through quantization and KV-cache optimization.
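As a final sketch of the memory-movement lever: at small batch sizes the bytes streamed per decode step put a hard floor on per-token latency. The model size, cache size, and H100 bandwidth below are assumptions for illustration:

```python
# At small batch sizes every weight (plus the KV cache) is read from HBM once per token,
# so bandwidth alone bounds the decode step time.
H100_BW = 3.35e12                  # bytes/s, assumed H100 SXM HBM3 bandwidth
params = 70e9                      # hypothetical 70B-parameter model
kv_cache = 2.7e9                   # bytes, e.g. the GQA cache from the sketch above

for fmt, bytes_per_param in [("bf16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    bytes_moved = params * bytes_per_param + kv_cache
    min_step = bytes_moved / H100_BW             # bandwidth-only lower bound per token
    print(f"{fmt}: >= {min_step * 1e3:5.1f} ms/token  (~{1 / min_step:4.0f} tok/s ceiling)")
```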