How to Build the Future: Demis Hassabis

| Business & Entrepreneurship | April 29, 2026 | 116K views | 40:57

TL;DR

Demis Hassabis predicts AGI by around 2030 and argues that while current large-scale pre-training and reinforcement learning form the foundation, breakthroughs in continual learning, memory consolidation, and introspective reasoning are still required to achieve true artificial general intelligence.

🎯 AGI Architecture & Timeline (2 insights)

Current paradigm is foundation but incomplete

Hassabis believes pre-training, RLHF, and chain-of-thought will be part of AGI's final architecture, but one or two major innovations—likely in memory and reasoning—are still missing to solve intelligence.

AGI timeline circa 2030

With a personal AGI timeline of roughly 2030, Hassabis advises deep tech founders to plan as though general intelligence will arrive mid-journey, and to build strategies that survive that transition rather than treating AGI as a distant horizon.

🧠 Memory & Continual Learning (2 insights)

Context windows are temporary 'duct tape'

Current models rely on massive context windows as brute-force working memory, but even million-token windows only capture roughly 20 minutes of video, making continual learning essential for long-term adaptation.
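
The arithmetic behind the "20 minutes of video" point can be sketched as follows; the tokens-per-second rate is an assumption chosen to illustrate the claim, not a published figure, and real video tokenizers vary widely.

```python
# Back-of-envelope: how much video fits in a 1M-token context window?
# Assumes ~800 tokens per second of video (hypothetical rate picked to
# illustrate the ~20-minute figure from the talk).
def video_minutes_in_context(context_tokens: int, tokens_per_second: int = 800) -> float:
    """Return how many minutes of video a context window can hold."""
    return context_tokens / tokens_per_second / 60

minutes = video_minutes_in_context(1_000_000)
print(f"{minutes:.0f} minutes")  # roughly 21 minutes
```

At that rate a million-token window holds well under an hour of footage, which is why a working memory alone cannot substitute for learning that persists across sessions.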

Neuroscience-inspired consolidation needed

Drawing on his PhD work on hippocampal function, Hassabis notes that the brain consolidates new memories through sleep-based replay, whereas today's stateless models cannot gracefully integrate new knowledge without expensive retraining.

🔄 Agents & Reasoning (2 insights)

Agents are the necessary path to AGI

Hassabis states that active problem-solving systems are non-negotiable for AGI, placing agents at DeepMind's center, though current capabilities remain experimental rather than reliable 'fire and forget' tools.

Reasoning lacks introspection

Current chain-of-thought reasoning is too brute-force; models need mechanisms to monitor their own thinking to avoid overthinking loops and elementary errors, potentially borrowing AlphaGo's Monte Carlo Tree Search techniques.
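
The talk does not spell out how MCTS-style search would be applied, but the statistic at its core is the UCB1 selection rule, which balances exploiting branches that have scored well against exploring under-sampled ones. A minimal sketch with toy, hypothetical rewards:

```python
import math
import random

def ucb1_select(visits, total_rewards, exploration=1.4):
    """Pick the child maximizing mean reward plus an exploration bonus.

    This is the selection rule Monte Carlo Tree Search applies at each
    node: accumulated statistics steer the search, instead of the
    brute-force expansion that plain chain-of-thought amounts to.
    """
    total_visits = sum(visits)
    best, best_score = None, float("-inf")
    for i, (n, r) in enumerate(zip(visits, total_rewards)):
        if n == 0:
            return i  # always try an unvisited option first
        score = r / n + exploration * math.sqrt(math.log(total_visits) / n)
        if score > best_score:
            best, best_score = i, score
    return best

# Toy run: option 2 pays off most often, so it attracts the most visits
# while the weaker options still get occasional exploratory samples.
random.seed(0)
payout = [0.2, 0.5, 0.8]  # hypothetical true success rates
visits, rewards = [0, 0, 0], [0.0, 0.0, 0.0]
for _ in range(1000):
    arm = ucb1_select(visits, rewards)
    visits[arm] += 1
    rewards[arm] += 1.0 if random.random() < payout[arm] else 0.0
print(visits)  # option 2 dominates
```

The exploration bonus is what would let a reasoning model notice it is stuck in an overthinking loop: a branch that keeps getting visited without improving its mean score stops winning the selection step.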

Efficiency & Edge Deployment (2 insights)

Distillation shows no theoretical limits

DeepMind's Flash models reach roughly 95% of frontier performance at about one-tenth the cost, and Hassabis sees no information-theoretic limit on model density: smaller models keep closing the gap with the frontier within six to twelve months.
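
Distillation itself is a standard technique. A minimal sketch of the core idea, using hypothetical logits: soften a teacher's output distribution with a temperature, then train the student to match that richer signal rather than just the top-1 answer.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature spreads probability mass."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened targets.

    Minimizing this trains a small student to mimic a large teacher's
    full output distribution, which carries far more signal per example
    than hard labels alone.
    """
    p = softmax(teacher_logits, temperature)  # soft targets
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# Hypothetical logits over 3 tokens: the closer the student tracks the
# teacher, the lower the loss.
teacher = [4.0, 1.0, 0.5]
print(distillation_loss(teacher, [3.9, 1.1, 0.4]))  # near the entropy floor
print(distillation_loss(teacher, [0.0, 4.0, 0.0]))  # much higher
```

Nothing in this objective depends on the student's size, which is consistent with the claim that no hard density limit has appeared yet.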

Edge computing enables privacy and robotics

Efficient small models allow local processing of sensitive audio and visual data on devices, reducing latency while maintaining privacy, with future home robotics requiring powerful local models orchestrated selectively with cloud systems.
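
One way to read the orchestration point is as a routing policy: keep requests on-device when they touch sensitive sensor data or tight latency budgets, and defer only hard, non-sensitive work to the cloud. A sketch with invented thresholds and labels (none of this is from the talk):

```python
from dataclasses import dataclass

@dataclass
class Request:
    contains_sensitive_media: bool  # e.g. home audio/video feeds
    latency_budget_ms: int
    needs_heavy_reasoning: bool

def route(req: Request) -> str:
    """Decide whether an on-device or a cloud model serves a request.

    Hypothetical policy: privacy and latency force local processing;
    only non-sensitive, slow-path, hard requests go to the cloud.
    """
    if req.contains_sensitive_media:
        return "local"   # raw sensor data never leaves the device
    if req.latency_budget_ms < 100:
        return "local"   # a cloud round trip would blow the budget
    if req.needs_heavy_reasoning:
        return "cloud"   # defer hard problems to a frontier model
    return "local"       # default: cheap and private

print(route(Request(True, 500, True)))   # local
print(route(Request(False, 500, True)))  # cloud
```

A home robot would sit on the first two branches almost constantly, which is why the efficiency work above is a prerequisite for the robotics scenario rather than an optimization.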

Bottom Line

Build assuming AGI arrives by 2030, focusing on agentic workflows that can adapt through continual learning while leveraging efficient edge models for privacy-sensitive applications.

More from Y Combinator

Beyond Bigger Models: Recursion As The Next Scaling Law In AI · 37:53

Recursion at inference time—rather than simply scaling model size—may be the next breakthrough in AI reasoning. Recent research on Hierarchical Reasoning Models (HRM) and Tiny Recursive Models (TRM) demonstrates that recursive architectures using shared weights can solve complex reasoning benchmarks like Arc Prize with minimal parameters, outperforming massive traditional LLMs.

1 day ago · 8 points
The $9B startup that wants to create a billion new developers · 39:12

Replit CEO Amjad Masad explains how the $9 billion startup evolved from a browser IDE into an AI-native 'vibe coding' platform that eliminates traditional coding entirely, enabling non-technical domain experts to build production-grade software through natural language and visual interfaces.

8 days ago · 10 points
How Stripe Built Their New Website · 43:37

Stripe Head of Design Katie Dill details the year-long process behind rebuilding Stripe's homepage after six years, shifting from a static payment-processor narrative to an interactive manifesto showcasing their full financial infrastructure platform through intentional UX, progressive disclosure, and meticulous animation craftsmanship.

11 days ago · 9 points
Robots Are Finally Starting to Work · 49:27

Physical Intelligence co-founder Quan Vuong explains why robotics is approaching its 'GPT-1 moment,' where cross-embodiment AI models trained on diverse hardware are beginning to exhibit emergent zero-shot capabilities and scaling laws previously unseen in the field.

17 days ago · 9 points