François Chollet: ARC-AGI-3, Beyond Deep Learning & A New Approach To ML

Business & Entrepreneurship | March 27, 2026 | 24.1K views | 57:24

TL;DR

François Chollet predicts AGI will arrive around 2030 but argues that current deep learning is fundamentally inefficient. Through his lab Ndea, he is pioneering symbolic program synthesis as a leaner alternative aimed at human-like skill-acquisition efficiency, while acknowledging that LLM-based systems will first dominate domains with verifiable reward signals, such as coding.

AGI Timeline and Strategic Approach

AGI expected by 2030

Chollet estimates AGI will likely emerge around 2030, roughly coinciding with the release of ARC-AGI-6 or -7, and emphasizes that AI progress is unstoppable and accelerating.

Ride the acceleration wave

Rather than attempting to slow AI development, the critical question for builders is how to leverage and harness this unstoppable acceleration effectively.

🧠 Ndea: A New Symbolic Paradigm

Program synthesis at the foundation

Ndea is developing a new machine learning substrate based on program synthesis that operates below the level of coding agents, replacing parametric curves with minimal symbolic models.

Symbolic descent vs gradient descent

The lab uses 'symbolic descent' to find the simplest possible symbolic models explaining data, adhering to the minimum description length principle for better generalization.
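
The idea can be sketched in miniature: enumerate candidate symbolic models, keep only those that explain the data exactly, and prefer the shortest. This is an illustrative toy under the minimum description length principle, not Ndea's actual method; the expression grammar and length scoring are invented for the example.

```python
# Minimal MDL-style sketch: among candidate symbolic models that fit
# the data exactly, return the one with the shortest description.
# The candidate set and token-length scores are hypothetical.

# Each candidate: (description, description length in tokens, function).
CANDIDATES = [
    ("x",       1, lambda x: x),
    ("x + 1",   3, lambda x: x + 1),
    ("2*x",     3, lambda x: 2 * x),
    ("x*x",     3, lambda x: x * x),
    ("2*x + 1", 5, lambda x: 2 * x + 1),
    ("x*x + x", 5, lambda x: x * x + x),
]

def simplest_model(examples):
    """Return the shortest symbolic model that fits every example."""
    fitting = [(desc, size) for desc, size, f in CANDIDATES
               if all(f(x) == y for x, y in examples)]
    return min(fitting, key=lambda t: t[1])[0] if fitting else None

# Data generated by y = 2x + 1; only "2*x + 1" fits all three points.
print(simplest_model([(0, 1), (1, 3), (2, 5)]))  # → 2*x + 1
```

A real system would search a vastly larger program space with learned guidance ("symbolic descent"), but the selection criterion is the same: the shortest program consistent with the data tends to generalize best.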

The case for divergent research

Chollet argues that while the industry consensus focuses on scaling LLMs, pursuing alternative approaches like his is essential even at only a 10-15% chance of success, precisely because almost no one else is exploring them.

💻 LLMs and Verifiable Domains 3 insights

Coding agents exploit verifiable rewards

The recent breakthrough in coding AI succeeds because code provides formally verifiable reward signals (unit tests, compilation), allowing models to generate trusted training data autonomously.
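
The loop described above can be sketched in a few lines: sample candidate programs, run them against a formal verifier (a unit test), and keep only the ones that pass as trusted training data. The verifier and candidates here are stand-ins invented for illustration.

```python
# Verifiable-reward sketch: keep a model's samples only if they pass
# a formal check. No human annotation is needed anywhere in the loop.

def unit_test(f):
    """Formal, automatic verifier: does f double its input?"""
    return all(f(n) == 2 * n for n in range(5))

# Stand-ins for sampled candidate programs: some correct, some buggy.
candidates = [
    ("lambda n: n + n", lambda n: n + n),  # correct
    ("lambda n: n * 3", lambda n: n * 3),  # buggy
    ("lambda n: 2 * n", lambda n: 2 * n),  # correct
]

# Filter: only verified samples become training data.
trusted = [src for src, f in candidates if unit_test(f)]
print(trusted)  # → ['lambda n: n + n', 'lambda n: 2 * n']
```

The key property is that the reward signal (pass/fail) is cheap, automatic, and trustworthy, which is exactly what fuzzy domains like essay writing lack.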

The verifiability divide

Domains with formal verification like mathematics will be rapidly automated, while fuzzy domains like essay writing will see slow progress due to reliance on expensive human annotations.

Inefficiency of current stack

While LLMs could theoretically simulate AGI with sufficient compute, Chollet argues this would be profoundly inefficient compared to future optimal approaches operating at lower levels.

📊 ARC-AGI and Intelligence Benchmarking 3 insights

Defining true AGI

Chollet defines AGI not as economic automation but as human-level skill-acquisition efficiency: the ability to master new tasks from minimal data, as humans do.

Origins of the benchmark

He created ARC-AGI after discovering in 2016 that gradient descent could not learn generalizable reasoning algorithms, instead overfitting to surface patterns rather than discovering underlying programs.
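
The contrast can be shown with a toy: a learner that memorizes surface input-output pairs fails on any unseen input, while a learner that searches for the underlying program generalizes from the same two examples. The tiny DSL here is hypothetical and chosen purely for illustration.

```python
# Toy contrast: pattern memorization (a stand-in for overfitting to
# surface statistics) vs. discovering the underlying program.

TRAIN = [([1, 2], [2, 1]), ([3, 4, 5], [5, 4, 3])]
TEST_IN, TEST_OUT = [7, 8, 9, 10], [10, 9, 8, 7]  # held out, longer

# "Surface" learner: memorizes exact input→output pairs it has seen.
table = {tuple(x): y for x, y in TRAIN}
memorized = table.get(tuple(TEST_IN))  # unseen input → None

# Program-search learner: try each primitive, keep one fitting all data.
DSL = {"identity": lambda xs: xs,
       "sorted":   lambda xs: sorted(xs),
       "reverse":  lambda xs: xs[::-1]}
program = next(name for name, f in DSL.items()
               if all(f(x) == y for x, y in TRAIN))

print(memorized)             # → None (no generalization)
print(DSL[program](TEST_IN))  # → [10, 9, 8, 7]
```

ARC-AGI tasks are built on the same premise: each task's solution is a short program, so memorizing surface patterns from a few examples is useless and only discovering the program suffices.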

Evolution to ARC-AGI-3

ARC-AGI-1 was too difficult for early models, ARC-AGI-2 is now saturating, and ARC-AGI-3 continues to measure true generalization as the field advances toward human-level sample efficiency.

Bottom Line

While current LLMs will dominate verifiable domains like coding and mathematics, achieving true AGI requires abandoning inefficient parametric learning for symbolic program synthesis that prioritizes minimal description length and human-like sample efficiency.
