The history and future of AI at Google, with Sundar Pichai
TL;DR
Sundar Pichai argues that Google's invention of the Transformer and its early work on LaMDA positioned the company for the AI era. Vertical integration, from custom TPUs to strict latency budgets, lets Google treat AI as an expansionary force driving search toward agentic workflows rather than as a zero-sum threat.
🧬 The Transformer Legacy and Product Constraints
Transformers emerged from product needs
Google researchers developed the Transformer to solve concrete product problems, such as scaling translation and running TPU inference for two billion users, and deployed it quickly through BERT and MUM to achieve the largest search-quality jumps in company history.
LaMDA existed before ChatGPT
Google had already built LaMDA, an internal ChatGPT equivalent, but constrained its release due to toxicity concerns and a higher product-quality bar rooted in search reliability standards.
Coding revealed the capability curve
OpenAI recognized the potential of large language models earlier in part because the coding use case, via GitHub Copilot, showed more pronounced jumps between successive GPT versions than general language tasks alone did.
⚡ Infrastructure and the Speed Advantage
Millisecond latency budgets enforce speed
Search teams operate under strict latency budgets: shaving off milliseconds earns credits that can be spent on future features, a discipline that produced a 30% latency improvement over five years despite massive additions of AI capability.
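The credit mechanism described here can be sketched as a simple ledger. This is a hypothetical illustration only; the class, method names, and millisecond figures are assumptions, not Google's actual system:

```python
class LatencyBudget:
    """Hypothetical sketch of a latency-credit ledger: optimizations that
    shave milliseconds bank credits a team can later spend on new features."""

    def __init__(self, budget_ms: float):
        self.budget_ms = budget_ms   # total allowed response latency
        self.credits_ms = 0.0        # banked savings from past optimizations

    def record_optimization(self, saved_ms: float) -> None:
        """Shaving latency banks an equivalent amount of credits."""
        self.credits_ms += saved_ms

    def propose_feature(self, cost_ms: float) -> bool:
        """A new feature ships only if banked credits cover its latency cost."""
        if cost_ms <= self.credits_ms:
            self.credits_ms -= cost_ms
            return True
        return False


budget = LatencyBudget(budget_ms=200.0)
budget.record_optimization(saved_ms=30.0)    # e.g. a faster ranking stage
print(budget.propose_feature(cost_ms=25.0))  # True: 30 ms of credits cover it
print(budget.propose_feature(cost_ms=10.0))  # False: only 5 ms of credit left
```

The point of the mechanism is that features are never free: each one must be paid for out of measured savings, so total latency can only trend down.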
Gemini Flash optimizes capability per millisecond
Google deliberately trades maximum capability for speed with Flash models that deliver 90% of Pro model performance at radically lower latency, enabled by vertical integration with seventh-generation TPUs.
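"Capability per millisecond" can be made concrete with a toy comparison. The scores and latencies below are invented for illustration (only the "90% of Pro" ratio comes from the text); they are not published benchmark figures:

```python
# Hypothetical numbers illustrating the capability-per-millisecond tradeoff.
# A Flash-style model with 90% of the Pro score but far lower latency wins
# on capability delivered per unit of user-perceived wait.
models = {
    "pro":   {"score": 100.0, "latency_ms": 400.0},  # assumed latency
    "flash": {"score": 90.0,  "latency_ms": 80.0},   # assumed latency
}

for m in models.values():
    m["score_per_ms"] = m["score"] / m["latency_ms"]

best = max(models, key=lambda name: models[name]["score_per_ms"])
print(best)  # flash
```

Under these assumed latencies, Flash delivers 1.125 score points per millisecond versus Pro's 0.25, which is the kind of ratio that makes a small capability sacrifice worthwhile for interactive products.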
Speed reflects technical health
Pichai views latency as a distinguishing feature that almost always reflects superior technical underpinnings, requiring rigorous balancing between the frontier of capabilities and user-perceived responsiveness.
🔮 Agentic Search and Market Reality
Search becomes an agent manager
The future of search involves handling asynchronous, long-running tasks where users delegate complex workflows to AI agents rather than typing one-line queries into a search box.
Sentiment shift validated vertical strategy
After negative sentiment drove shares to approximately $150, Gemini 2.5's multimodal capabilities demonstrated Google's full-stack strength, justifying plans to scale CapEx from $30 billion to roughly $175-180 billion.
AI is expansionary, not zero-sum
Google views the AI transition as analogous to the shift from single-cell to complex organisms, where Search and Gemini will overlap and diverge simultaneously rather than cannibalize each other.
Bottom Line
Bet on vertical integration and treat AI as an expansionary force that requires balancing frontier capabilities with strict latency discipline to capture the shift toward agentic computing.
More from Stripe
Compliance at scale and why TAM is a distraction, with Christina Cacioppo of Vanta
Christina Cacioppo explains how Vanta turned compliance from a bureaucratic burden into a scalable security platform serving 15,000+ customers, revealing why compliance—not security—is the true forcing function for startup infrastructure and how automated monitoring transforms periodic audits into continuous readiness.
The 20-year journey to fully autonomous cars with Dmitri Dolgov of Waymo
Waymo Co-CEO Dmitri Dolgov details the 20-year technical evolution from Google's self-driving moonshot to 500,000 weekly autonomous rides, explaining why full autonomy requires augmenting end-to-end AI with structured intermediate representations and a 'three teachers' training framework rather than relying solely on scaled-up vision models.
Creating prediction markets (and suing the CFTC) with Tarek Mansour and Luana Lopes Lara
Kalshi founders Tarek Mansour and Luana Lopes Lara recount their four-year battle to launch the first CFTC-regulated prediction market in the US, culminating in a lawsuit against their own regulator to offer election contracts, and why their 'permission-first' approach ultimately enabled $10+ billion monthly volumes.
Bret Taylor of Sierra on AI agents, outcome-based pricing, and the OpenAI board
Bret Taylor explores how AI agents are shifting from polished but forgetful tools to messy, context-rich systems that leverage markdown memory and code repository structures, predicting software engineering will evolve from writing code to crafting 'harnesses' of documentation while enterprises move beyond APIs toward agent-accessible infrastructure.