"Vibe Coding is a Slot Machine" - Jeremy Howard

| Podcasts | March 03, 2026 | 126K views | 1:26:40

TL;DR

Deep learning pioneer Jeremy Howard argues that 'vibe coding' with AI is a dangerous slot machine: developers gamble on unmaintainable code under an illusion of control. He contrasts this with his philosophy that real software engineering insight emerges from interactive exploration (REPLs and notebooks) and deep engagement with models, drawing on his foundational ULMFiT research to show how understanding, not gambling, drives sustainable productivity.

🎰 The 'Vibe Coding' Trap (3 insights)

Slot machine psychology

AI coding tools create an illusion of control where developers craft prompts and MCPs but ultimately 'pull the lever,' gambling on code they cannot understand or maintain.

Productivity hype vs. reality

Despite claims of 50x productivity gains, empirical studies show only a 'tiny uptick' in actual software shipping, with no evidence of massive increases in high-quality output.

Betting on black boxes

Howard questions the professional wisdom of betting company products on code that 'no one understands,' noting that current AI systems are 'really bad at software engineering.'

🧠 Interactive Understanding (3 insights)

The feedback loop of insight

Real understanding emerges from 'poking at a problem until it pushes back' through interactive environments like notebooks and REPLs, allowing developers to manipulate objects and build mental models.

LLMs cosplay comprehension

While language models excel at surface-level statistical correlations, they lack true hierarchical abstractions; they 'pretend to understand things' without the deep structural knowledge that interaction builds.

Human-AI learning parallels

Like humans learning new skills, models can learn specific tasks without catastrophic forgetting if engineers monitor activations and gradients rather than treating training as a mystery.

⚙️ ULMFiT & Transfer Learning (3 insights)

Gradual unfreezing discipline

ULMFiT pioneered fine-tuning only the last layer first, then gradually unfreezing earlier layers with discriminative learning rates (a different rate per layer) to avoid destroying pre-trained representations.
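The two mechanisms above can be sketched as plain schedule functions, independent of any framework. This is a minimal illustration, not the ULMFiT implementation; the decay factor 2.6 is the value reported in the ULMFiT paper, and the layer indexing is illustrative.

```python
# Sketch of ULMFiT-style discriminative learning rates and gradual
# unfreezing. Layers are indexed 0 (earliest) .. n-1 (last/task head).

def discriminative_lrs(n_layers, top_lr=1e-3, factor=2.6):
    """Per-layer learning rates, last layer first: each earlier layer
    trains at the previous layer's rate divided by `factor`."""
    return [top_lr / factor**depth for depth in range(n_layers)]

def unfreezing_schedule(n_layers):
    """Gradual unfreezing: at epoch k, only the top k+1 layers are
    trainable; everything earlier stays frozen."""
    return [list(range(n_layers - 1, n_layers - 2 - epoch, -1))
            for epoch in range(n_layers)]

# For a 3-layer model: epoch 0 trains layer [2] only, epoch 1 trains
# [2, 1], epoch 2 trains [2, 1, 0], each at its own shrinking rate.
lrs = discriminative_lrs(3)
schedule = unfreezing_schedule(3)
```

In a real training loop these outputs would feed an optimizer's per-parameter-group settings (e.g. PyTorch's `param_groups`), rebuilding the optimizer each time a new layer is unfrozen.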

General corpus prerequisite

Effective transfer learning requires pre-training on general-purpose corpora (like Wikipedia) rather than specialized domains, enabling the model to compress world knowledge into hierarchical abstractions for downstream tasks.

Visualizing dead neurons

Effective fine-tuning requires actively watching for 'dead neurons' (units with zero gradients) and other training pathologies via activation visualization; these are fixable patterns, not inevitable mysteries.
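The core check behind dead-neuron monitoring is simple: after a forward pass through a ReLU layer, a unit whose activation is zero for every input in the batch also receives zero gradient, so it can never recover. A pure-Python sketch (in practice this would run on framework tensors via activation hooks; the values below are made up for illustration):

```python
# Minimal dead-neuron detector for a ReLU layer.

def relu(row):
    return [max(0.0, v) for v in row]

def dead_units(batch_activations):
    """Indices of units whose activation is zero for every input in
    the batch -- their gradient through the ReLU is zero as well."""
    n_units = len(batch_activations[0])
    return [j for j in range(n_units)
            if all(row[j] == 0.0 for row in batch_activations)]

# Pre-activations for a batch of 3 inputs, 4 units each; unit 1 is
# negative on every input, so it "dies" under ReLU.
pre = [[0.5, -1.0, 0.2, -0.1],
       [1.1, -0.3, 0.0,  0.4],
       [0.2, -2.0, 0.9,  0.7]]
acts = [relu(row) for row in pre]
print(dead_units(acts))  # → [1]
```

Visualizing the fraction of dead units per layer over training is the kind of diagnostic Howard argues turns training from a mystery into a fixable engineering problem.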

Bottom Line

Stop gambling with AI-generated code you don't understand and instead build software through interactive exploration, as sustainable engineering requires deep comprehension of the systems you create, not just prompts that pull a lever.

More from Machine Learning Street Talk

Solving the Wrong Problem Works Better - Robert Lange (1:18:07)

Robert Lange from Sakana AI explains how evolutionary systems like Shinka Evolve demonstrate that scientific breakthroughs require co-evolving problems and solutions through diverse stepping stones, while current LLMs remain constrained by human-defined objectives and fail to generate autonomous novelty.

12 days ago · 8 points

If You Can't See Inside, How Do You Know It's THINKING? [Dr. Jeff Beck] (46:57)

Dr. Jeff Beck argues that agency cannot be verified from external behavior alone, requiring instead evidence of internal planning and counterfactual reasoning, while advocating for energy-based models and joint embedding architectures as biologically plausible alternatives to standard function approximation.

about 2 months ago · 10 points

Abstraction & Idealization: AI's Plato Problem [Mazviita Chirimuuta] (53:38)

Mazviita Chirimuuta argues that AI's assumption of discoverable mathematical "source code" underlying messy reality repeats Plato's idealism, warning that scientific abstraction is a practical tool for limited human cognition rather than a window into eternal truths about mind or mechanism.

2 months ago · 8 points