Solving the Wrong Problem Works Better - Robert Lange

| Podcasts | March 13, 2026 | 22.1K views | 1:18:07

TL;DR

Robert Lange from Sakana AI explains how evolutionary systems like Shinka Evolve demonstrate that scientific breakthroughs require co-evolving problems and solutions through diverse stepping stones, while current LLMs remain constrained by human-defined objectives and fail to generate autonomous novelty.

🧬 Evolutionary Discovery Principles (2 insights)

Stepping stones precede breakthroughs

Innovation follows an evolutionary tree where diverse intermediate discoveries must be collected before converging on solutions, rather than through direct optimization toward fixed goals.

Problem invention enables solutions

True creativity often requires inventing new problems recursively before solving them, a capability current AI systems lack when restricted to static, human-defined evaluation functions.

⚙️ Shinka Evolve Architecture (3 insights)

Adaptive multi-model ensembling

The system dynamically selects among frontier LLMs, including GPT and Gemini, choosing whichever model best suits a given parent program; this significantly improves sample efficiency over single-model approaches.
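One way to picture this kind of adaptive selection is as a multi-armed bandit over models. The episode only says the system "dynamically selects" between frontier LLMs; the UCB1 rule below is a standard mechanism for that style of allocation, sketched here as an illustration rather than Sakana's actual implementation (model names are placeholders).

```python
import math
import random

class ModelSelector:
    """UCB1 bandit over candidate LLMs, rewarded by fitness improvement."""

    def __init__(self, models):
        self.models = models
        self.pulls = {m: 0 for m in models}     # times each model was chosen
        self.reward = {m: 0.0 for m in models}  # cumulative fitness gain

    def choose(self):
        # Try every model once before exploiting.
        for m in self.models:
            if self.pulls[m] == 0:
                return m
        total = sum(self.pulls.values())
        # UCB1: mean observed reward plus an exploration bonus.
        def ucb(m):
            return self.reward[m] / self.pulls[m] + math.sqrt(
                2 * math.log(total) / self.pulls[m])
        return max(self.models, key=ucb)

    def update(self, model, fitness_gain):
        self.pulls[model] += 1
        self.reward[model] += fitness_gain

selector = ModelSelector(["model_a", "model_b", "model_c"])
for step in range(100):
    m = selector.choose()
    gain = random.random() * (1.5 if m == "model_b" else 1.0)  # toy feedback
    selector.update(m, gain)
```

Over time the bandit shifts its sampling toward whichever model has been yielding the largest improvements, while the exploration bonus keeps occasionally re-testing the others.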

Island populations with knowledge diffusion

Programs evolve in parallel islands, with an archive database diffusing successful mutations across populations; LLMs edit or cross over code guided by real-time evaluator feedback.
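The island-plus-archive structure can be sketched in miniature. Here "programs" are reduced to single floats and mutation to Gaussian noise; in the system described they are code that an LLM edits against evaluator feedback. All names and numbers below are illustrative, not Shinka Evolve's configuration.

```python
import random

random.seed(0)

def fitness(x):
    return -abs(x - 3.0)  # toy objective: get close to 3.0

def evolve_island(pop, n_children=8):
    parent = max(pop, key=fitness)
    children = [parent + random.gauss(0, 0.5) for _ in range(n_children)]
    # Elitism: keep the best individuals among parents and children.
    return sorted(pop + children, key=fitness, reverse=True)[:len(pop)]

islands = [[random.uniform(-10, 10) for _ in range(5)] for _ in range(4)]
archive = []  # shared archive that diffuses good solutions across islands

for gen in range(50):
    islands = [evolve_island(pop) for pop in islands]
    if gen % 10 == 9:  # periodic migration via the archive
        elites = [max(pop, key=fitness) for pop in islands]
        archive.append(max(elites, key=fitness))
        for pop in islands:
            pop[-1] = random.choice(archive)  # replace worst with an archived elite

best = max((max(pop, key=fitness) for pop in islands), key=fitness)
```

The islands search mostly independently, which preserves diversity, while the periodic archive exchange lets a breakthrough on one island seed the others.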

Self-adaptive evolution

The evolutionary algorithm co-evolves its own parameters during runtime, continuously adjusting model prioritization strategies as the optimization progresses.
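A classic toy version of self-adaptation comes from evolution strategies: each individual carries its own mutation step size, which mutates and is selected along with the solution. The episode describes the analogous idea one level up (the algorithm re-weighting its model-prioritization during the run); the log-normal step-size rule below is a textbook stand-in, not Shinka Evolve's actual mechanism.

```python
import math
import random

random.seed(1)

def fitness(x):
    return -x * x  # maximize => drive x toward 0

# Each individual is (solution, step_size); the step size self-adapts.
pop = [(random.uniform(-5, 5), 1.0) for _ in range(10)]

for gen in range(100):
    children = []
    for x, sigma in pop:
        new_sigma = sigma * math.exp(0.2 * random.gauss(0, 1))  # mutate sigma first
        children.append((x + new_sigma * random.gauss(0, 1), new_sigma))
    # Elitist selection over parents and children, ranked by solution fitness.
    pop = sorted(pop + children, key=lambda ind: fitness(ind[0]), reverse=True)[:10]

best_x, best_sigma = pop[0]
```

Because step sizes that produce good offspring are inherited, the search parameters themselves adapt as optimization progresses, without any externally scheduled annealing.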

⚠️ Autonomous AI Limitations (3 insights)

Novelty stagnation in autonomous mode

LLMs running autonomously quickly plateau without generating interesting discoveries, remaining parasitic on their starting conditions and unable to escape local optima without human intervention.

Starting point sensitivity

Systems beginning with highly optimized solutions get trapped in local optima, while impoverished starting points enable greater diversity but require significantly longer optimization horizons.

Verification bottlenecks

Generating candidate solutions is computationally cheap relative to rigorously verifying their correctness, creating a fundamental bottleneck for autonomous scientific discovery systems.

Bottom Line

Future AI systems must co-evolve problems and solutions from diverse, unconstrained starting points rather than merely optimizing fixed objectives, embracing open-endedness to achieve true autonomous scientific discovery.

More from Machine Learning Street Talk

"Vibe Coding is a Slot Machine" - Jeremy Howard
1:26:40
Machine Learning Street Talk

Deep learning pioneer Jeremy Howard argues that 'vibe coding' with AI is a dangerous slot machine that produces unmaintainable code through an illusion of control, contrasting it with his philosophy that true software engineering insight emerges from interactive exploration (REPLs/notebooks) and deep engagement with models, drawing on his foundational ULMFiT research to demonstrate how understanding—not gambling—drives sustainable productivity.

22 days ago · 9 points
If You Can't See Inside, How Do You Know It's THINKING? [Dr. Jeff Beck]
46:57
Machine Learning Street Talk

Dr. Jeff Beck argues that agency cannot be verified from external behavior alone, requiring instead evidence of internal planning and counterfactual reasoning, while advocating for energy-based models and joint embedding architectures as biologically plausible alternatives to standard function approximation.

about 2 months ago · 10 points
Abstraction & Idealization: AI's Plato Problem [Mazviita Chirimuuta]
53:38
Machine Learning Street Talk

Mazviita Chirimuuta argues that AI's assumption of discoverable mathematical "source code" underlying messy reality repeats Plato's idealism, warning that scientific abstraction is a practical tool for limited human cognition rather than a window into eternal truths about mind or mechanism.

2 months ago · 8 points