Abstraction & Idealization: AI's Plato Problem [Mazviita Chirimuuta]

| Podcasts | January 23, 2026 | 14.8K views | 53:38

TL;DR

Mazviita Chirimuuta argues that AI's assumption of discoverable mathematical "source code" underlying messy reality repeats Plato's idealism, warning that scientific abstraction is a practical tool for limited human cognition rather than a window into eternal truths about mind or mechanism.

🧮 Abstraction vs Idealization

Abstraction omits while idealization falsifies

Abstraction omits known details, such as friction, while idealization attributes properties known to be false, such as an infinite population size in genetics calculations.

Mathematical models create cleaner fictions

Idealization presents reality as neater and more tractable than it actually is, at the risk of mistaking these practical simplifications for discoveries of underlying truth.

🏛️ AI's Platonic Fallacy

The kaleidoscope effect assumes hidden code

AI researchers often assume the universe operates on decomposable mathematical rules hidden beneath messy data, echoing Plato's contrast between eternal forms and flawed appearances.

Signal versus noise is a human decision

Classifying data as signal versus noise reflects scientists' subjective judgments about what is relevant, not an objective fact about which patterns are truly significant in nature.

⚠️ Historical Warnings from Neuroscience

Reflex theory shows dangers of over-simplification

Reflex theory dominated late 19th-century neuroscience by idealizing all brain function as conditioned sensory-motor loops, even though Charles Sherrington himself conceded that such simple reflexes probably do not exist in reality.

Lab results fail to generalize to real complexity

Mechanistic views that treat cognition as computational source code risk repeating this historical error by ignoring the environmental complexity and interactivity critical to real-world animal behavior.

🤝 Knowledge as Interaction

Haptic realism emphasizes engagement over observation

Knowledge emerges through active manipulation and tactile engagement with the world, contrasting with passive "spectator" theories that treat vision as a model for disinterested knowing.

Science constructs through constrained interaction

Scientific understanding results from iterative interaction between human conceptual framing and natural constraints, not from reading off the universe's objective source code independently of any human contribution.

Bottom Line

Approach AI and computational models as practical tools shaped by human cognitive limitations and constructive engagement rather than revelations of inevitable mathematical truths about the mind.
