François Chollet: ARC-AGI-3, Beyond Deep Learning & A New Approach To ML

Business & Entrepreneurship | March 27, 2026 | 41.6K views | 57:24

TL;DR

François Chollet predicts AGI will arrive around 2030 but argues current deep learning is fundamentally inefficient. Through his lab Ndea, he is pioneering symbolic program synthesis as a more optimal alternative focused on human-like skill-acquisition efficiency, while acknowledging that LLM-based systems will first dominate domains with verifiable reward signals, such as coding.

⏳ AGI Timeline and Strategic Approach

AGI expected by 2030

Chollet estimates AGI will likely emerge around 2030, coinciding with the release of ARC-AGI 6 or 7, and emphasizes that AI progress is unstoppable and accelerating.

Ride the acceleration wave

Rather than attempting to slow AI development, the critical question for builders is how to leverage and harness this unstoppable acceleration effectively.

🧠 Ndea: A New Symbolic Paradigm

Program synthesis at the foundation

Ndea is developing a new machine learning substrate based on program synthesis that operates below the level of coding agents, replacing parametric curves with minimal symbolic models.

Symbolic descent vs gradient descent

The lab uses 'symbolic descent' to find the simplest possible symbolic models explaining data, adhering to the minimum description length principle for better generalization.
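The idea can be illustrated with a toy sketch (this is an illustration of MDL-guided synthesis in general, not the lab's actual system): an enumerative synthesizer searches programs shortest-first over a small hypothetical DSL, so the first program that fits all the data is also the one with the minimum description length.

```python
# Toy MDL-guided program synthesis: enumerate symbolic programs
# shortest-first over a tiny hypothetical DSL and return the first
# (hence simplest) one that explains every input/output example.
from itertools import product

# Hypothetical DSL: each primitive maps an int to an int.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dec": lambda x: x - 1,
    "double": lambda x: x * 2,
    "neg": lambda x: -x,
}

def run(program, x):
    """Apply a sequence of primitive names left to right."""
    for name in program:
        x = PRIMITIVES[name](x)
    return x

def synthesize(examples, max_len=3):
    """Return the shortest program consistent with all examples.

    Searching shortest-first encodes the MDL bias: among all programs
    that fit the data, prefer the smallest description, which tends
    to generalize best.
    """
    for length in range(1, max_len + 1):
        for program in product(PRIMITIVES, repeat=length):
            if all(run(program, x) == y for x, y in examples):
                return list(program)
    return None

# From the pairs (2 -> 5) and (3 -> 7), the shortest consistent
# program is "double the input, then increment".
print(synthesize([(2, 5), (3, 7)]))  # -> ['double', 'inc']
```

The contrast with gradient descent is that the search is over discrete symbolic structures rather than continuous parameters, and the preference for simplicity is built into the search order itself.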

The case for divergent research

Chollet argues that while the industry consensus focuses on scaling LLMs, pursuing alternative approaches like his, despite only a 10-15% chance of success, is essential precisely because no one else is pursuing them.

💻 LLMs and Verifiable Domains

Coding agents exploit verifiable rewards

The recent breakthrough in coding AI succeeds because code provides formally verifiable reward signals (unit tests, compilation), allowing models to generate trusted training data autonomously.
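This loop can be sketched minimally (all names here are illustrative, not any specific product's API): run each model-generated candidate against a fixed test suite, assign a binary reward, and keep only the verified outputs as trusted training data.

```python
# Minimal sketch of unit tests as a verifiable reward signal: candidate
# programs are scored by executing them against a test suite, so correct
# outputs can be selected automatically, without human review.

CANDIDATES = [
    "def add(a, b): return a - b",   # buggy candidate
    "def add(a, b): return a + b",   # correct candidate
]

# Each test is (arguments, expected result) for the target function.
TESTS = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]

def reward(source):
    """Return 1.0 if the candidate passes every test, else 0.0."""
    namespace = {}
    try:
        exec(source, namespace)          # define the candidate function
        fn = namespace["add"]
        return float(all(fn(*args) == expected
                         for args, expected in TESTS))
    except Exception:
        return 0.0                       # crashes count as failure

# Keep only candidates the tests formally verify.
verified = [src for src in CANDIDATES if reward(src) == 1.0]
print(verified)  # -> only the correct implementation survives
```

The key property is that the reward comes from execution, not from a human or a learned judge, which is what makes the resulting data trustworthy at scale.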

The verifiability divide

Domains with formal verification like mathematics will be rapidly automated, while fuzzy domains like essay writing will see slow progress due to reliance on expensive human annotations.

Inefficiency of current stack

While LLMs could theoretically simulate AGI with sufficient compute, Chollet argues this would be profoundly inefficient compared to future optimal approaches operating at lower levels.

📊 ARC-AGI and Intelligence Benchmarking

Defining true AGI

Chollet defines AGI not as economic automation but as human-level skill acquisition efficiency—the ability to master new tasks with minimal data like humans do.

Origins of the benchmark

He created ARC-AGI after discovering in 2016 that gradient descent could not learn generalizable reasoning algorithms: models overfit to surface patterns rather than discovering the underlying programs.

Evolution to ARC-AGI-3

ARC-AGI V1 was too difficult for early models, V2 is now saturating, and V3 continues to measure true generalization as the field advances toward human-level sample efficiency.

Bottom Line

While current LLMs will dominate verifiable domains like coding and mathematics, achieving true AGI requires abandoning inefficient parametric learning for symbolic program synthesis that prioritizes minimum description length and human-like sample efficiency.
