Abstraction & Idealization: AI's Plato Problem [Mazviita Chirimuuta]

| Podcasts | January 23, 2026 | 12.5K views | 53:38

TL;DR

Mazviita Chirimuuta argues that AI's assumption of discoverable mathematical "source code" underlying messy reality repeats Plato's idealism, warning that scientific abstraction is a practical tool for limited human cognition rather than a window into eternal truths about mind or mechanism.

🧮 Abstraction vs Idealization

Abstraction omits while idealization falsifies

Abstraction ignores known details like friction, while idealization attributes properties known to be false, such as assuming infinite populations in genetics calculations.

Mathematical models create cleaner fictions

Idealization presents reality as neater and more tractable than it actually is, risking conflation of these practical simplifications with discoveries of underlying truth.

🏛️ AI's Platonic Fallacy

The kaleidoscope effect assumes hidden code

AI researchers often assume the universe operates on decomposable mathematical rules hidden beneath messy data, echoing Plato's contrast between eternal forms and flawed appearances.

Signal versus noise is a human decision

Classifying data as signal versus noise is a scientist's judgment about what is relevant, not an objective fact about which patterns are truly significant in nature.

⚠️ Historical Warnings from Neuroscience

Reflex theory shows dangers of over-simplification

Reflex theory dominated late 19th-century neuroscience by idealizing all brain function as conditioned sensory-motor loops, even though Charles Sherrington himself conceded that such simple reflexes probably do not exist in reality.

Lab results fail to generalize to real complexity

Mechanistic views that treat cognition as computational source code risk repeating historical errors by ignoring the environmental complexity and interactivity critical to real-world animal behavior.

🤝 Knowledge as Interaction

Haptic realism emphasizes engagement over observation

Knowledge emerges through active manipulation and tactile engagement with the world, contrasting with passive "spectator" theories that treat vision as a model for disinterested knowing.

Science constructs through constrained interaction

Scientific understanding results from iterative interaction between human conceptual framing and natural constraints, not from simply reading off the universe's objective source code independent of human contribution.

Bottom Line

Approach AI and computational models as practical tools shaped by human cognitive limitations and constructive engagement rather than revelations of inevitable mathematical truths about the mind.

More from Machine Learning Street Talk

Solving the Wrong Problem Works Better - Robert Lange (1:18:07)

Robert Lange from Sakana AI explains how evolutionary systems like Shinka Evolve demonstrate that scientific breakthroughs require co-evolving problems and solutions through diverse stepping stones, while current LLMs remain constrained by human-defined objectives and fail to generate autonomous novelty.

12 days ago · 8 points

"Vibe Coding is a Slot Machine" - Jeremy Howard (1:26:40)

Deep learning pioneer Jeremy Howard argues that "vibe coding" with AI is a dangerous slot machine: it produces unmaintainable code through an illusion of control. He contrasts this with his philosophy that real software-engineering insight emerges from interactive exploration (REPLs and notebooks) and deep engagement with models, drawing on his foundational ULMFiT research to show how understanding, not gambling, drives sustainable productivity.

22 days ago · 9 points

If You Can't See Inside, How Do You Know It's THINKING? [Dr. Jeff Beck] (46:57)

Dr. Jeff Beck argues that agency cannot be verified from external behavior alone, requiring instead evidence of internal planning and counterfactual reasoning, while advocating for energy-based models and joint embedding architectures as biologically plausible alternatives to standard function approximation.

about 2 months ago · 10 points