Machine Learning Street Talk

209K subscribers

MLST is the leading highly technical AI podcast. Subscribe now! We bring you the latest in advanced AI research, from the best AI experts in the world. Our approach is unrivalled in terms of scope and rigour – we believe in diversity of ideas (which is to say, not just LLMs!) and we also cover other promising alternative paths to AGI, as well as CogSci, CompSci, Neuro, Mathematics, Philosophy of Mind and Language.

Support us on Patreon for early access, exclusive content, a private Discord, biweekly calls and much more: https://www.patreon.com/mlst

Donate here: https://www.paypal.com/donate/?hosted_button_id=K2TYRVPBGXVNA

Please email us to learn about sponsorship packages and deals: tim at mlst.ai (please put your budget in the subject line).

Podcast booking agencies – *don't contact us* – we wouldn't even interview anyone who needed a booking agent. Media/influence agencies – *don't contact us* – we only work directly with brands/sponsors.

Solving the Wrong Problem Works Better - Robert Lange
1:18:07

Robert Lange from Sakana AI explains how evolutionary systems like ShinkaEvolve demonstrate that scientific breakthroughs require co-evolving problems and solutions through diverse stepping stones, while current LLMs remain constrained by human-defined objectives and fail to generate autonomous novelty.

12 days ago · 8 points
"Vibe Coding is a Slot Machine" - Jeremy Howard
1:26:40

Deep learning pioneer Jeremy Howard argues that 'vibe coding' with AI is a dangerous slot machine that produces unmaintainable code through an illusion of control. He contrasts it with his philosophy that true software engineering insight emerges from interactive exploration (REPLs and notebooks) and deep engagement with models, drawing on his foundational ULMFiT research to show how understanding, not gambling, drives sustainable productivity.

22 days ago · 9 points
If You Can't See Inside, How Do You Know It's THINKING? [Dr. Jeff Beck]
46:57

Dr. Jeff Beck argues that agency cannot be verified from external behavior alone, requiring instead evidence of internal planning and counterfactual reasoning, while advocating for energy-based models and joint embedding architectures as biologically plausible alternatives to standard function approximation.

about 2 months ago · 10 points
Abstraction & Idealization: AI's Plato Problem [Mazviita Chirimuuta]
53:38

Mazviita Chirimuuta argues that AI's assumption of discoverable mathematical "source code" underlying messy reality repeats Plato's idealism, warning that scientific abstraction is a practical tool for limited human cognition rather than a window into eternal truths about mind or mechanism.

2 months ago · 8 points
Why Every Brain Metaphor in History Has Been Wrong [SPECIAL EDITION]
42:05

The video argues that every historical model of the brain, from hydraulic pumps to modern computers, represents a "fallacy of misplaced concreteness" in which useful technological metaphors are mistaken for literal biological reality. It advocates instead for epistemic humility about whether nature is truly simple or merely intelligible through necessary human simplifications.

2 months ago · 9 points
AutoGrad Changed Everything (Not Transformers) [Dr. Jeff Beck]
1:16:38

Dr. Jeff Beck argues that AutoGrad—not Transformers—enabled the modern AI revolution by turning neural network development into an engineering discipline, but current systems remain limited by function approximation alone; achieving human-like intelligence requires scalable Bayesian models structured like the brain and grounded in the causal physics of the world.

3 months ago · 10 points
Your Brain Doesn't Command Your Body. It Predicts It. [Max Bennett]
3:17:10

Max Bennett synthesizes evolutionary neuroscience and AI to argue that the brain operates as a predictive generative model rather than a passive sensory processor, where the neocortex enables 'learning by imagining' through mental simulations orchestrated in partnership with older brain structures.

3 months ago · 9 points
Why Scientists Can't Rebuild a Polaroid Camera [César Hidalgo]
1:37:06

César Hidalgo argues that knowledge follows three fundamental laws governing its growth, diffusion, and valuation, emphasizing that knowledge is collective, non-fungible, and embodied in networks rather than individuals or books—with critical implications for why development efforts fail and how organizations actually learn.

3 months ago · 10 points
PhD Bodybuilder Predicts The Future of AI (97% Certain) [Dr. Mike Israetel]
2:55:47

Dr. Mike Israetel argues with 97% certainty that Artificial Super Intelligence (ASI) will arrive by late 2026—defined as systems vastly exceeding humans in most cognitive domains—while debating whether true intelligence requires physical embodiment or merely abstract problem-solving capability.

3 months ago · 8 points
The "Final Boss" of Deep Learning
43:58

Despite consuming hundreds of billions of operations per token, current large language models fail at reliable arithmetic and algorithmic reasoning, revealing a fundamental limitation that tool use cannot fix; the path forward requires categorical deep learning to provide the unifying theoretical framework that geometric deep learning cannot.

3 months ago · 10 points